
February 3, 2007

This Week’s Finds in Mathematical Physics (Week 244)

Posted by John Baez

In week244 of This Week’s Finds, guess when the first calculus textbook was written, and in what language. Learn about Tom Leinster’s method of computing the size of a category. This generalizes the "Euler characteristic" in topology, but it’s also related to "Möbius inversion" in combinatorics. Also - hear how Heisenberg invented matrix mechanics, and how Euler might have invented the Euler characteristic while strolling the bridges of Königsberg!

Posted at February 3, 2007 4:45 AM UTC

105 Comments & 8 Trackbacks

Re: This Week’s Finds in Mathematical Physics (Week 244)

BBC Radio 4’s series “In Our Time” had an interesting program about Indian Mathematics a few weeks ago, which you can still listen to from the BBC website.

Posted by: Stuart on February 3, 2007 8:20 AM

Re: This Week’s Finds in Mathematical Physics (Week 244)

John, can you write more about Heisenberg’s matrix formulation of quantum mechanics?

I very much like this idea: there’s a “question”: what happens to electrons when they change orbitals? And an answer: the question is inappropriate, because changing orbital is a morphism in the category of states.

I didn’t get why one calls this algebra of morphisms “observables”, though.

Is there the same story with position? I mean, an electron may only be in one of a discrete set of positions on, e.g., the x-axis, and what happens in between, “moving” from one such position to another, is a morphism in a suitable category?

Posted by: sirix on February 3, 2007 5:13 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

sirix writes:

John, can you write more about Heisenberg’s matrix formulation of quantum mechanics?

If you want to understand how physical processes and also observables can be understood as matrices, the beginning of this book is really good:

  • Richard Feynman, The Feynman Lectures on Physics, vol. 3, Addison-Wesley, New York, 1970.

If you want to know what Heisenberg actually did, try this:

  • Jagdish Mehra and Helmut Rechenberg, The Formulation of Matrix Mechanics and its Modifications, 1925-1926, Springer-Verlag, Berlin, 1982.

One insufficiently advertised fact is that not only quantum mechanics but also classical mechanics and statistical mechanics can be formulated as ‘matrix mechanics’! The main difference is the sort of ‘numbers’ you’re allowed to use as matrix entries.

Right now I’m trying to give a thorough explanation of this, starting in week11 and continuing in week12 and forthcoming weeks of my class on quantization and cohomology. It’ll take me a while to finish telling this story…

I didn’t get why one calls this algebra of morphisms “observables”, though.

It would be better if I called them ‘operators’. It’s a curious and important fact that these operators are used to describe both processes and also observables. For example, the operator H called the ‘Hamiltonian’ corresponds to the observable called ‘energy’, but the operators exp(−itH) correspond to the processes called ‘time evolution’.

Is there the same story with position?

The same story works here too, though position is continuous rather than discrete. One just needs ‘∞ × ∞ matrices’, which require a little analysis to deal with properly.
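The two roles that operators play here can be sketched numerically. The following is a minimal numpy illustration, with a made-up 2-level Hamiltonian and ℏ = 1: read directly, the self-adjoint matrix H is an observable; exponentiated, it becomes the unitary process exp(−itH).

```python
import numpy as np

# A toy self-adjoint "energy" observable (made-up entries, hbar = 1):
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])
assert np.allclose(H, H.conj().T)        # self-adjoint: an observable

# The same matrix generates a process, via U(t) = exp(-itH).
# Compute the exponential through the spectral decomposition of H:
t = 0.7
w, V = np.linalg.eigh(H)                 # H = V diag(w) V*
U = V @ np.diag(np.exp(-1j * t * w)) @ V.conj().T

assert np.allclose(U.conj().T @ U, np.eye(2))   # time evolution is unitary
```

One self-adjoint matrix thus plays both parts, which is exactly the double role being discussed.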

Posted by: John Baez on February 4, 2007 8:07 AM

Re: This Week’s Finds in Mathematical Physics (Week 244)

“It would be better if I called them ‘operators’. It’s a curious and important fact that these operators are used to describe both processes and also observables. For example, the operator H called the ‘Hamiltonian’ corresponds to the observable called ‘energy’, but the operators exp(−itH) correspond to the processes called ‘time evolution’.”

This is the aspect I have never been told about. Physicists I talked with about quantum mechanics tended to say: “The Universe works like this: anytime you want to measure the position of an electron, you ask the Universe, and He gives you one of the eigenvectors of the appropriate operator. That’s the way things are, and nobody knows more.”

But I knew there must be something more to this! Actually, there was a time when I supposed that the state after measurement is a state that comes from the action of the appropriate operator on the state before measurement. But this is not the way things go.

However, now I see that it was not that stupid! You seem to say that these observables are indeed somehow connected with changing states. I’d love to hear more about this. Are the references above still appropriate?

Posted by: sirix on February 4, 2007 8:05 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

sirix wrote:

Actually, there was a time when I supposed that the state after measurement is a state that comes from the action of the appropriate operator on the state before measurement. But this is not the way things go.

It is the way things go if your operator is a projection: a self-adjoint operator p with p² = p. These correspond to ‘filters’, like a polaroid filter that only lets light of a specific polarization through. You can read about this in Feynman’s book.

But, it’s not the way things go for a general self-adjoint operator.

I too was very puzzled by this when first learning quantum mechanics. I think everybody who actually thinks about the subject goes through this puzzlement; some people give up thinking about it, while others keep worrying about it. What do operators in quantum mechanics really mean? Why do we use them to describe both processes and observables — and what about the case when the process is an observation?

Are the references above still appropriate?

Yes, you can learn a lot about this from reading Feynman and also the history of what Heisenberg did! But, you’ll have to do a lot of thinking on your own — nobody seems to come right out and give a formal theory relating ‘operators as observables’ and ‘operators as physical processes’, except for the special case of how self-adjoint observables H give rise to one-parameter groups of unitary processes exp(−itH).
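The projection case mentioned above can be sketched in a few lines of numpy; the filter angle and incoming state below are made up. For a projection, “apply the operator and renormalize” really does give the post-measurement state:

```python
import numpy as np

theta = np.pi / 6                          # hypothetical filter orientation
v = np.array([np.cos(theta), np.sin(theta)])
p = np.outer(v, v)                         # project onto the chosen polarization

assert np.allclose(p @ p, p)               # p^2 = p: a projection
assert np.allclose(p, p.T)                 # and self-adjoint

psi = np.array([1.0, 0.0])                 # incoming polarization state
passed = p @ psi
prob = np.linalg.norm(passed) ** 2         # chance of passing: cos^2(theta)
after = passed / np.linalg.norm(passed)    # state after passing the filter
```

For a general self-adjoint operator this recipe fails, which is exactly the puzzle under discussion.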

Posted by: John Baez on February 4, 2007 8:57 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

It is the way things go if your operator is a projection: a self-adjoint operator p with p² = p. These correspond to ‘filters’, like a polaroid filter that only lets light of a specific polarization through.

This feels almost like a pun. Just to verify my own understanding, you really mean “filter” in the logical sense – a subset of the poset of subspaces of the Hilbert space of states.

I’ve always been rather charmed by Jeff Bub’s approach, as outlined in Interpreting the Quantum World: the poset of (measurable) subsets of phase space lives as the idempotents in the algebra of (measurable) functions on that space. Changing phase space to a Hilbert space, and the algebra of functions to the algebra of linear operators, we get quantum mechanics.

In both cases, though, we see what you mentioned above: that observables are special kinds of operators.
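The classical half of Bub’s correspondence can be checked concretely for a finite “phase space” (the three-point space below is made up): the idempotents of the function algebra are exactly the indicator functions of subsets.

```python
from itertools import product

X = ['a', 'b', 'c']                  # a toy three-point phase space

# Functions X -> R, sampled over a small grid of candidate values.
# An idempotent of the function algebra satisfies f * f = f pointwise,
# which forces every value to be 0 or 1: an indicator function of a subset.
candidates = product([0.0, 0.5, 1.0], repeat=len(X))
idempotents = [f for f in candidates if all(v * v == v for v in f)]

assert len(idempotents) == 2 ** len(X)   # one idempotent per subset of X
```

Replacing functions on X by operators on a Hilbert space, the self-adjoint idempotents become the projections, i.e. the subspaces discussed above.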

Posted by: John Armstrong on February 4, 2007 9:42 PM

Observable, infinite matrices; Re: This Week’s Finds in Mathematical Physics (Week 244)

Dr. George Hockney (formerly FermiLab, now JPL) tells me that they really are “observables” because they are NOT about the actual quantum state, but instead about our knowledge of that state. A subtle and rather metaphysical distinction, but perhaps relevant in a universe where negative information seems possible in QM.

Infinite matrices seem strange to those conventionally educated, but, by chance, the first QM book I owned as a child was a translation of Heisenbergian matrix mechanics, and so I learned about matrices first with the infinite case, and only later with 2x2 and up, which I first learned through a popular publication by Isaac Asimov.

I’ve often wondered whether, if children were taught special relativity first and Newtonian physics afterwards, they would see the world differently. There’s a famous science fiction story about a Navajo being linguistically biased towards SR, and a film to be released this year based on the classic story about children learning 4-D via toys from the future.

Posted by: Jonathan Vos Post on February 4, 2007 9:46 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

John Armstrong wrote:

This feels almost like a pun. Just to verify my own understanding, you really mean “filter” in the logical sense – a subset of the poset of subspaces of the Hilbert space of states.

No, I was using “filter” the way people do in the foundations of quantum mechanics. It’s an experiment that lets through the things that have some property, and not the ones that don’t. Mathematically, this corresponds to a specific subspace of a Hilbert space! Or if you prefer, the operator of projecting onto that subspace.

It is funny how this word means these different things…

Posted by: John Baez on February 4, 2007 10:29 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

Well they’re not really so different, it seems. A property state (projection operator) generates a filter in the (non-Boolean) algebra of dynamical propositions. It “lets through” all propositions above the one state in the poset and filters out the rest.

Posted by: John Armstrong on February 4, 2007 11:09 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

Oh, okay — so it is a nice pun: a filter in the physics sense gives a projection, which gives rise to a ‘principal filter’ in the lattice of projections.

Posted by: John Baez on February 5, 2007 7:16 AM

Re: This Week’s Finds in Mathematical Physics (Week 244)

Here’s some more about quantum mechanics and the Euler characteristic of a category. Maybe someone can help me out.

Suppose X is a groupoid. Then the category algebra ℂ[X] becomes a *-algebra in an obvious way, by setting

f* = f⁻¹

for every morphism in X, and extending * to be a conjugate-linear map

* : ℂ[X] → ℂ[X].

This is why algebras of observables in quantum mechanics are usually *-algebras. Here X is a groupoid of ‘states’ and ‘transitions’ for some physical system. A great example is the groupoid Heisenberg was considering when he invented matrix mechanics: the groupoid of n uniquely isomorphic objects. Then ℂ[X] is the algebra of n × n matrices. Using the above recipe to define the * operation on ℂ[X], it turns out that a* is just the usual adjoint of the matrix a!
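The claim about the Heisenberg groupoid is easy to check numerically. In the sketch below (numpy, random entries), the morphism j → i is identified with the matrix unit e(i, j), and * is defined by conjugating coefficients and inverting morphisms; it comes out equal to the matrix adjoint.

```python
import numpy as np

n = 3

def e(i, j):
    """Matrix unit: the basis element of C[X] for the morphism j -> i."""
    m = np.zeros((n, n), dtype=complex)
    m[i, j] = 1
    return m

# A generic element a = sum_ij a[i,j] e(i,j) of the category algebra:
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# The * operation: conjugate each coefficient, and send each morphism
# to its inverse, i.e. e(i, j) to e(j, i).
a_star = sum(np.conj(a[i, j]) * e(j, i)
             for i in range(n) for j in range(n))

assert np.allclose(a_star, a.conj().T)   # * is the usual matrix adjoint
```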

When X is a groupoid we can talk about Leinster’s ‘weightings’ and ‘coweightings’ in a manner somewhat more palatable to the physicist, as follows.

Recall that the vector space of states H(X) consists of formal linear combinations of objects of X, just as the category algebra ℂ[X] consists of formal linear combinations of morphisms.

Recall also that a weighting is the same as a vector ψ ∈ H(X) such that

Zψ = 1

where the zeta operator Z ∈ ℂ[X] is the formal sum of all morphisms in X, and 1 is the formal sum of all objects in X.

(If X is infinite, consult your local analyst before replacing sums by integrals.)

Now here’s where the *-algebra structure comes in: I believe that in this situation, what Leinster calls a coweighting is the same as a vector φ ∈ H(X) with

Z*φ = 1.

If we take the obvious inner product on H(X), where the objects of X form an orthonormal basis, the Euler characteristic χ(X) turns out to equal

⟨φ, 1⟩ = ⟨1, ψ⟩

whenever both a weighting and a coweighting exist.

It’s fun to see why these two expressions are equal:

⟨φ, 1⟩ = ⟨φ, Zψ⟩ = ⟨Z*φ, ψ⟩ = ⟨1, ψ⟩.

I don’t know what this stuff means, but it’s the kind of manipulation physicists love to do in elementary quantum mechanics! So, it should lead to something… I’m just not sure what.
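The recipe above is easy to run on a small example. Here is a numpy sketch for the poset a < b < c viewed as a category, with the ‘zeta matrix’ counting morphisms between objects:

```python
import numpy as np

# Z[x, y] = number of morphisms x -> y in the poset a < b < c:
Z = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

one = np.ones(3)                       # the formal sum of all objects
psi = np.linalg.solve(Z, one)          # weighting:   Z psi  = 1
phi = np.linalg.solve(Z.T, one)        # coweighting: Z* phi = 1

chi = one @ psi                        # <1, psi> ...
assert np.isclose(chi, phi @ one)      # ... equals <phi, 1>
assert np.isclose(chi, 1.0)            # chi = 1: the poset is contractible
```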

Since when do quantum systems come born with a special operator Z, which means ‘do all possible processes and add them all up’? They don’t. So, all this stuff is sort of screwy.

Maybe the operator Z will disappear into the formalism if we work not with the ‘obvious’ inner product of objects, but with some other inner product. Or, maybe we should really treat the coweighting not as an element of H(X), but of H(X)*.

For example, suppose I define a new inner product

(−, −) := ⟨−, Z−⟩

and see how things simplify. Well, one thing we see is that

χ(X) = (φ, ψ).

That’s sort of cute, but I’d rather get something like

χ(X) = (1, 1)

since this reminds me of the formula for the total measure of a measure space:

m(X) = ⟨1, 1⟩_{L²(X)}.

Even if I get this stuff to work, I’m still just doing the case where X is a groupoid, which is what Jim and I could handle before Tom came along. And in the groupoid case, the really important vector space is the one with isomorphism classes of objects as its basis — not this weird space H(X). So, it’s not clear that I’m making any progress…

Posted by: John Baez on February 4, 2007 10:11 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

This is neat. I wish I had the time to do more than just post a link, e.g. actually think about it, but I’m currently playing the role of Mr Mom.

Here are some of my thoughts from the first time I saw Leinster’s stuff; my initial reaction seems to be vaguely similar to what you did here.

One thing I didn’t state explicitly, but which is probably obvious, is that

∑_a e^a = 1,

which seems reminiscent of your statement: “Recall also that a weighting is the same as a vector ψ ∈ H(X) such that

Zψ = 1

where the zeta operator Z ∈ ℂ[X] is the formal sum of all morphisms in X, and 1 is the formal sum of all objects in X.”

Posted by: Eric on February 4, 2007 10:58 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

I’m intrigued by these connections with discrete calculus, which I first met in your previous posts and which I now understand a bit better thanks to the explanations of my friend and colleague Misha Feigin. He also pointed me towards some papers of Novikov and Dynnikov about graph Laplacians, discrete Schrödinger operators, etc., such as this.

When Rota started developing Möbius inversion for posets, he pointed out that it could be understood as a common generalization of two different things: (i) Möbius inversion in number theory, and (ii) the solution of difference equations. Everyone remembers (i), of course, but (ii) is the relevant one here. And we want to replace the usual number line by a general directed graph (especially one underlying a category).

That’s all a bit vague, but I hope to go to a seminar on Thursday that will sharpen my thinking on this.
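In the meantime, point (ii) can be made very concrete: on a finite chain, the zeta matrix is “cumulative sum” and its inverse, the Möbius matrix, is exactly the difference operator. A short numpy check (the data vector is arbitrary):

```python
import numpy as np

n = 6
zeta = np.triu(np.ones((n, n)))        # zeta[i, j] = 1 iff i <= j on the chain
mu = np.linalg.inv(zeta).round().astype(int)

# The Mobius matrix of the chain is the difference operator:
# 1 on the diagonal, -1 on the first superdiagonal.
assert np.array_equal(mu, np.eye(n, dtype=int) - np.eye(n, k=1, dtype=int))

f = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])   # arbitrary data
g = f @ zeta          # g[j] = sum of f[i] over i <= j: a cumulative sum
assert np.allclose(g @ mu, f)   # Mobius inversion = differencing g
```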

Posted by: Tom Leinster on February 5, 2007 3:22 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

Allow me to add my three cents’ worth to this discussion about arrow algebras and vector spaces of states!

Cent 1: Taking the category algebra (or arrow algebra) ℂ[X] of a groupoid X is cool, but in my opinion in many cases it is unnecessary and obscures the problem.

For instance: if G is a group, one defines the loop groupoid ΛG as the category

(1) ΛG = Fun(Σℤ, ΣG),

where ‘Σ’ of a group refers to viewing it as a one-object category (ΛG is a category Urs and I often talk about). Thus ΛG is just G acting on itself by conjugation. It has objects x ∈ G, and morphisms (g : x → gxg⁻¹) for each g, x ∈ G, with composition given by

(2) (h : gxg⁻¹ → (hg)x(hg)⁻¹) ∘ (g : x → gxg⁻¹) = (hg : x → (hg)x(hg)⁻¹).

Anyhow, Freed was the first to realize that the Hopf algebra which quantum algebraists call the ‘Drinfeld double’ of G is nothing but the groupoid algebra of ΛG:

(3) D(G) = ℂ[ΛG].

Suppose we start talking about representations. Then it’s clear that

(4) Rep(D(G)) ≅ Rep(ΛG),

that is, the representation category of the algebra D(G) is the same as the representation category of the groupoid ΛG. (By the way, this stuff is all done here.)

To me, the groupoid picture is much simpler! For a representation of ΛG is manifestly just a vector space V_x sitting above each x ∈ G, with a linear map V_x → V_{gxg⁻¹} sitting above x → gxg⁻¹ in ΛG.

Of course, a representation of the algebra D(G) can also be thought of in this way - but only once you’ve applied projection operators, etc. Why bother?

It’s the same as with representations of quivers. Nobody I know likes to think of a representation of a quiver Q as a representation of the quiver algebra ℂ[Q] - rather, they think of it as a representation of the free category associated to the quiver Q.

It’s kind of trivial, but it’s important, because in the groupoid picture all sorts of geometrical intuition is immediately available. One can talk about cocycles, twistings, sections, etc., which brings me to…

Cent two: John spoke about the space of states H(X) as the vector space of linear combinations of the objects of X. I would venture that perhaps it’s nicer to think of H(X) as the space of sections Γ(X) of the trivial line bundle with trivial connection over X. Again, all these ideas are borrowed from this.

Cent three: When one thinks about how the arrows act on the states, I am reminded of some stuff I read by Chris Isham: “A new approach to quantising space-time: III. State vectors as functions on arrows”. I think his ideas are cool, and definitely relevant to the stuff we talk about here! (Also his other papers.) One of the interesting things he defines is an arrow field on a category X: this is just a selection A(x) of an arrow going out of x, for each object x ∈ X. Note that you can compose two arrow fields, so the collection of arrow fields forms a monoid Arr(X) - which is a group if X is a groupoid.

Anyhow, clearly Arr(X) acts on H(X)… just something to think about. Another thing which acts on H(X) is of course Aut(X) - which is more relevant when H = Γ(X) for a non-trivial connection on X.
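Arrow fields and their action on states can be sketched in a few lines. Take the codiscrete groupoid on three objects (exactly one morphism between any ordered pair), so an arrow field amounts to a choice of target for each object; the specific choices below are made up.

```python
import numpy as np

objs = [0, 1, 2]   # objects of the codiscrete groupoid on three objects:
                   # exactly one morphism x -> y for every ordered pair (x, y)

# An arrow field picks one arrow out of each object; here that amounts to
# a choice of target for each object, i.e. a function objs -> objs.
A = {0: 1, 1: 2, 2: 2}   # two made-up arrow fields
B = {0: 0, 1: 0, 2: 1}

def compose(second, first):
    # (second after first)(x): follow first's arrow from x, then second's.
    return {x: second[first[x]] for x in first}

def act(field, psi):
    # The induced linear action on H(X): send the basis vector at x
    # to the basis vector at the target of the chosen arrow.
    out = np.zeros_like(psi)
    for x in objs:
        out[field[x]] += psi[x]
    return out

psi = np.array([1.0, 2.0, 3.0])
# Acting twice agrees with acting by the composite: a monoid action on H(X).
assert np.allclose(act(B, act(A, psi)), act(compose(B, A), psi))
```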

Posted by: Bruce Bartlett on February 8, 2007 12:22 AM

Re: This Week’s Finds in Mathematical Physics (Week 244)

Bruce wrote:

Nobody I know likes to think of a representation of a quiver Q as a representation of the quiver algebra ℂ[Q].

I seem to recall this viewpoint is pretty common, and is used here:

David J. Benson, Representations and Cohomology I, Cambridge U. Press, Cambridge 1991.

However, I agree with you that working with quiver algebras — or more generally category algebras — is a bit feeble. It’s actually better to work with category algebroids, where we take linear combinations of morphisms with the same source and target, getting a new category whose hom-spaces are now vector spaces.

Posted by: John Baez on February 8, 2007 1:54 AM

Re: This Week’s Finds in Mathematical Physics (Week 244)

Whoops! I knew I was a little out of my league when I wrote that.

Posted by: Bruce Bartlett on February 8, 2007 2:41 AM

Isham’s formulation

Cent three : When one thinks about how the arrows act on the states, I am reminded of some stuff I read by Chris Isham : “A new approach to quantising space-time : III. State vectors as functions on arrows”.

Many thanks for mentioning this!

I was vaguely aware of this work, but have not really read it. I will do so; it is very closely related to what we are talking about.

In particular, Chris Isham’s notion of an arrow field that you mention is directly related to the idea of propagators and the difference between U and Z that I mentioned above.

Isham’s arrow fields are more or less one more way to talk about special “short” arrows in a category whose morphisms you want to interpret as paths.

If we were in an SDG context, we would take a morphism to be any path in our manifold, and then we would label as special those morphisms which go straight between two infinitesimally neighbouring points.

If we require Isham’s “arrow fields” to take values in these “short” paths, then an “arrow field” becomes precisely a vector field!

Moreover, his momentum operators then coincide exactly with the standard notion of momentum operators.

That’s why I kept emphasizing categories generated from graphs and interpreting the graph’s edges as “short” morphisms:

take your underlying graph to be that of ordered pairs of infinitesimal neighbours in the synthetic description of a differentiable manifold M.

Then the category “generated” from this (if we allow morphisms consisting of infinitely many edges) is, intuitively at least, the category whose morphisms are paths in M. Small paths are those connecting two infinitesimal neighbours.

Posted by: urs on February 8, 2007 11:08 AM

Re: This Week’s Finds in Mathematical Physics (Week 244)

Bruce wrote:

When one thinks about how the arrows act on the states, I am reminded of some stuff I read by Chris Isham : “A new approach to quantising space-time : III. State vectors as functions on arrows”. I think his ideas are cool, and definitely relevant to the stuff we talk about here!

Certainly. Thanks again for recalling that!

Some discussion of how this relates to the stuff I had in mind is now available here:

Isham on Arrow Fields

Posted by: urs on February 8, 2007 7:27 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

This paper

Ihara zeta functions for periodic simple graphs

has a curious expression relating the Euler characteristic, the graph Laplacian, and a zeta function for finite graphs.

The following paper (among other things) makes an interesting statement that is probably obvious to everyone besides me, and lets me guess at the motive behind a cryptic comment Urs made in that other discussion on the Euler characteristic of categories.

A Riemannian approach to graph embedding

The statement was: “The eigenvalues of the Laplace–Beltrami operator can be used to determine both the volume and the Euler characteristic of the manifold.”

Posted by: Eric on February 5, 2007 5:16 AM

Re: This Week’s Finds in Mathematical Physics (Week 244)

Since when do quantum systems come born with a special operator Z, which means ‘do all possible processes and add them all up’? They don’t.

They do: the path integral! :-)

In this entry I tried to describe a way to obtain the quantum propagator using something very similar.

Posted by: urs on February 5, 2007 11:00 AM

Re: This Week’s Finds in Mathematical Physics (Week 244)

Since when do quantum systems come born with a special operator Z, which means ‘do all possible processes and add them all up’? They don’t.

They do: the path integral! :-)

In this entry I tried to describe a way to obtain the quantum propagator using something very similar.

I’ll expand on that a little:

Consider a system with a set conf of configurations, which, in each time interval τ, may jump along any one of a collection of oriented edges E = {(a → b), (c → d), …} from one configuration to another.

(Alternatively, if you understand enough topos theoretic synthetic differential algebra or something along these lines, consider this internal to a suitable topos such that it defines the infinitesimal evolution of our system. But this shall not concern me here.)

For simplicity, let’s assume that there is at most one edge in E with a given source and target configuration.

Let there be a “phase” (“cost of transition”) associated with each such edge. More precisely, let there be some abelian category C and a graph map tra : E → C.

“Graph map” just means: like a functor, but without any condition on composition (since we have no composition in E). If we pass from E to the graph category it generates, then tra becomes a functor on that.

Now, let’s quantize. What should that mean? Somehow, we should “sum tra over everything in sight”.

So consider the morphism U in C which is defined as follows (think of the “image” of the adjacency matrix of E under tra):

U is an endomorphism U : ⊕_{x ∈ conf} tra(x) → ⊕_{x ∈ conf} tra(x) of the direct sum of the objects in C that tra assigns to the points of configuration space.

So U is entirely specified by giving all its components U(x, y) : tra(x) → tra(y) between the fibers over all points in configuration space. Let’s simply set U(x, y) = 0 if no edge x → y exists, and U(x, y) = tra(x → y) otherwise.

So U is supposed to be something like “the sum of phases over all one-step evolution paths of our system”, where “sum” is some blend of “direct sum” and something else.

(All the effort of that entry of mine was to realize this as an honest colimit of something.)

Notice now various things:

- This U is much like the image under tra of what John called the zeta operator, or what Eric Forgy and I used to call the graph operator. It has a couple of interesting properties.

- This U is essentially U = U(τ), the “quantum propagator” over time τ of our system.

This is maybe better seen by considering its powers U(nτ) := Uⁿ. The (x, y)-component of this morphism is the morphism from the fiber over x to the fiber over y given by the sum of phases over all possible paths of length n between x and y.

Up to a measure on the space of such paths, this is indeed the (discrete) path integral representation of the quantum propagator!
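This can be checked directly in the simplest setting where every fiber is just ℂ, so U is a matrix of phases: the (x, y) entry of Uⁿ equals the brute-force sum of phases over all length-n paths from x to y. A numpy sketch with a made-up three-configuration system:

```python
import numpy as np
from itertools import product

conf = [0, 1, 2]

# Hypothetical one-step phases: phase[x, y] = e^{iS(x -> y)} if the edge
# x -> y exists, and 0 if it does not.
phase = np.array([[0.0, np.exp(0.3j), 0.0],
                  [0.0, 0.0, np.exp(-0.1j)],
                  [np.exp(1.2j), 0.0, 0.0]])

U = phase                                  # one-step propagator U(tau)
n = 4
Un = np.linalg.matrix_power(U, n)          # n-step propagator U(n tau)

def path_sum(x, y):
    """Brute force: sum phases over all length-n paths from x to y."""
    total = 0.0 + 0.0j
    for mids in product(conf, repeat=n - 1):
        pts = [x, *mids, y]
        amp = 1.0 + 0.0j
        for a, b in zip(pts, pts[1:]):
            amp *= phase[a, b]
        total += amp
    return total

assert all(np.isclose(Un[x, y], path_sum(x, y))
           for x in conf for y in conf)
```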

One might imagine that this measure comes in more or less naturally as we take the continuum limit of the above context, somehow.

When I first wrote about this, I found this very simple observation pretty cool. As you recall, I was trying to understand Freed’s philosophy that “taking the path integral is ‘the same’ as taking the space of sections” concretely in simple examples.

So I tried to see if the above U is the colimit of something nice: because it incorporates the right kind of sum over objects and over morphisms at the same time.

Meanwhile, I feel I have a good understanding of Freed’s prescription at the level of objects.

But I still feel the need to incorporate the above construction more thoroughly in a general conceptual understanding of “quantization”.

Posted by: urs on February 5, 2007 8:48 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

I should maybe add the simplest example to keep in mind:

let C simply be the category of complex vector spaces, and let the fiber over each point simply be a copy of the complex numbers: tra(x) = ℂ.

Then U = U(τ) is nothing but an n × n matrix (with n = |conf| the number of configurations) of complex numbers:

U_ij = 0 if there is no edge between x_i and x_j, and U_ij = some phase if that’s the phase associated to moving from x_i to x_j in one “unit of time”.

Posted by: urs on February 5, 2007 8:55 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

I like this stuff you’re saying, Urs. A bunch of similar things should show up in my course on quantization and cohomology, where I’ll soon try to build a Hilbert space starting from a category X of ‘states and processes’ and a functor

e^{iS/ℏ} : X → U(1)

assigning a phase to each process.

Urs wrote:

John wrote:

Since when do quantum systems come born with a special operator Z, which means ‘do all possible processes and add them all up’? They don’t.

They do: the path integral! :-)

My reason for not suggesting that answer was that in the path integral we sum over all processes weighted by phases. We don’t do the path integral starting from just a category X; we also need the functor

e^{iS/ℏ} : X → U(1).

But, I bet your answer is actually right. Given a functor of the above sort, it extends uniquely to a linear map

f : ℂ[X] → ℂ

and this linear map sends the zeta operator Z ∈ ℂ[X] to a complex number f(Z). This number is related to what people call the partition function of the system.

(Amusingly, the partition function is usually called Z.)

Furthermore, any category X comes with a ‘trivial’ choice of e^{iS/ℏ}, assigning the phase 1 to each process. So, if we don’t have any more intelligent choice of e^{iS/ℏ} at hand, we can always use that.
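Here is a tiny sketch of that linear extension, for the poset a < b < c viewed as a category; the action S below is made up, with ℏ = 1. The extension of e^{iS} sends the formal sum of all morphisms to the sum of all the phases, and with the trivial choice of phases it just counts morphisms.

```python
import numpy as np

# Morphisms of the poset a < b < c, viewed as a category (identities included):
morphisms = [('a','a'), ('b','b'), ('c','c'), ('a','b'), ('b','c'), ('a','c')]

# A hypothetical action S assigning a number to each process (hbar = 1):
S = {m: 0.2 * i for i, m in enumerate(morphisms)}

def f(m):
    return np.exp(1j * S[m])        # the functor e^{iS/hbar} on morphisms

# Z is the formal sum of all morphisms, so the linear extension of f
# sends it to the sum of all the phases: a "partition function".
Z_value = sum(f(m) for m in morphisms)

# With the trivial choice (every phase 1), f(Z) just counts morphisms:
trivial = sum(np.exp(0j) for m in morphisms)
assert np.isclose(trivial, 6.0)
```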

I’ve been very lazy about reading what you’ve written about quantization — I’m so busy that when I see a big pile of diagrams involving ‘tra’ and ‘tri’ and ‘pha’ and so on, I tend to skip it and read something easier. But, I suspect we’re chewing on opposite ends of the same bone — and you’re probably chewing a lot faster than me…

Posted by: John Baez on February 7, 2007 1:38 AM

powers of zeta and path integrals

this linear map sends the zeta operator Z ∈ ℂ[X] to a complex number f(Z). This number is related to what people call the partition function of the system.

It is interesting how this is related to Tom’s formula for the Euler characteristic.

What is also becoming very interesting is how we have two slightly different perspectives, both on the possible meaning of Tom’s formula and on the operator Z.

I am apparently confused about something – and I’ll need to clarify that!

The difference in perspective is this:

Assume for a moment that the category X that we are talking about is the graph category of some graph.

Then

1) The entity that I called U is essentially the sum of all edges of the graph.

The powers Uⁿ of U consist of a sum over all formal concatenations of n edges.

2) The entity that you call Z is the sum of all morphisms, i.e. of all formal concatenations of edges.

The powers Zⁿ of Z consist again of all possible morphisms, but now weighted by the number of ways that they may be broken up into n composable morphisms.

Hm…

Both concepts encode almost the same idea, somehow. The n-th power both of U as well as of Z is like a “path integral” over all paths that “require n time steps”.

The difference is in what counts as a path that may be traversed in a single time step.

In 1) such a “1-step path” is literally the shortest sort of path available.

In 2) every path may be traversed in one time step.

Notice that 1) seems to make some sense in Lawvere-like reasoning if we think of “single time step” as “infinitesimal time interval” and of “edge of the graph” as “two infinitesimally neighbouring points”.

So taking powers of U builds up paths from small (infinitesimal) pieces.

On the other hand, taking powers of Z takes long macroscopic paths and breaks them up into smaller constituents, in a way. (At least it counts the number of ways to do so.)
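The U-versus-Z contrast can be made concrete with a little NumPy sketch. The three-vertex graph here is my own toy example, and I take Z without identity morphisms; neither convention is fixed in the discussion above.

```python
import numpy as np

# Toy graph on vertices x, y, z with edges x->y, y->z, x->z
# (an illustrative example, not one from the thread).
U = np.array([[0, 1, 1],    # U = "sum of all edges": the adjacency matrix
              [0, 0, 1],
              [0, 0, 0]])

# (U^n)[i][j] counts concatenations of exactly n edges from i to j.
U2 = np.linalg.matrix_power(U, 2)
assert U2[0, 2] == 1        # one 2-edge path: x -> y -> z

# Z = "sum of all (non-identity) morphisms" of the free category;
# here U^3 = 0, so the sum of powers is finite.
Z = U + U2 + np.linalg.matrix_power(U, 3)
assert Z[0, 2] == 2         # two morphisms x -> z: the edge, and x -> y -> z

# (Z^n)[i][j] counts morphisms i -> j weighted by the number of ways
# to break them into n composable non-identity pieces.
Z2 = np.linalg.matrix_power(Z, 2)
assert Z2[0, 2] == 1        # only x -> y -> z splits into two such pieces
```

So U^n builds paths up from edges, while Z^n re-counts whole morphisms by their n-fold decompositions, exactly the two perspectives contrasted above.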

Posted by: urs on February 7, 2007 4:34 PM | Permalink | Reply to this

Re: powers of zeta and path integrals

I wrote:

1) The entity that I called U is essentially the sum of all edges of the graph.

The powers U^n of U consist of a sum over all formal concatenations of n edges.

This becomes much clearer in pictures:

Notice that, in this example, V_x \oplus V_y is the space of sections of the vector bundle over the vertices of the graph, i.e. our space of states.

The “universal property” of the propagator is that it is the universal 1-disk phase, in that every possible holonomy \mathbb{C} \stackrel{e_1}{\to} V_x \stackrel{\mathrm{tra}(x,y)}{\to} V_y \stackrel{\bar e_2}{\to} \mathbb{C}

over an “infinitesimal 1-disk” (an edge of the graph)

factors through the infinitesimal propagator: it is the “sum of all infinitesimal paths”.

Posted by: urs on February 8, 2007 11:57 AM | Permalink | Reply to this

Re: powers of zeta and path integrals

Hi Urs,

Would it be misguided to relate this to what we did with discrete calculus? If you want to consider a differential algebra with this stuff, then certain “paths” must vanish and there is a maximum “length” path that corresponds to the “dimension” of the graph. Or am I just confused?

I know discrete calculus a lot better than I know n-categories, but I’ve always thought the two had some important relation.

Eric

Posted by: Eric on February 8, 2007 4:27 PM | Permalink | Reply to this

Re: powers of zeta and path integrals

Urs wrote:

The entity that I called U is essentially the sum of all edges of the graph. The powers U^n of U consist of a sum over all formal concatenations of n edges.

Tim Silverman (see below) writes about how if you set U = D + S, with D the diagonal part of U, then U = D + S = D(1 + D^{-1} S), so that

(1)\array{ U^{-1} &=& \frac{1}{1 + D^{-1}S} D^{-1} \\ &=& (1 - D^{-1}S + (D^{-1}S)^2 - (D^{-1}S)^3 + \cdots ) D^{-1} }
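A quick numerical sanity check of this geometric-series expansion of U^{-1} = (1 - D^{-1}S + (D^{-1}S)^2 - \cdots) D^{-1}. The particular U below is a made-up example with D invertible and D^{-1}S small in norm, which is what the series needs in order to converge.

```python
import numpy as np

# A hypothetical U = D + S: invertible diagonal part D, small off-diagonal S.
D = np.diag([2.0, 3.0, 4.0])
S = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5],
              [0.5, 0.0, 0.0]])
U = D + S
M = np.linalg.inv(D) @ S        # M = D^{-1} S, spectral radius < 1 here

# Partial sums of (1 - M + M^2 - M^3 + ...) D^{-1}
approx = np.zeros_like(U)
power = np.eye(3)
for _ in range(50):
    approx += power
    power = power @ (-M)
approx = approx @ np.linalg.inv(D)

assert np.allclose(approx, np.linalg.inv(U))   # series reproduces U^{-1}
```

When the spectral radius of D^{-1}S reaches 1 the partial sums stop converging, which is where the ‘sum over Feynman diagrams’ reading breaks down too.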

This is the old quantum field theory trick: one is calculating the connected partition function, or the effective action, or the renormalized thing-a-ma-jig, or something like that, by summing over all Feynman diagrams. You’ll know more about this than me!

In general U is not invertible… but somehow it seems clear that there must be some connection between this ‘sum over paths/Feynman diagram/renormalization’ reasoning, the things you are saying above about propagators, and the Euler characteristic. In fact, I think you’ve already explained this connection… I just haven’t understood it yet!

David Corfield wrote:

Is it surprising that if you take any two categories with the same number of objects and same number of arrows between pairs, but different multiplication between arrows, then the Euler characteristics of their classifying spaces (if well-defined) are equal?

I find it a bit surprising. One possibility that one might imagine is to define \chi(C) as the actual Euler characteristic of the nerve of C. In this way it’s clear that ‘composition information’ is encoded from the very start. The trouble with this is that the Euler characteristic will not converge in general. Perhaps one could create a ‘generating function’

(2)\chi(C)(z) = \sum_i (-1)^i |H^i(C, \mathbb{Z})| z^i

instead?

Anyway, somehow Tom’s approach recovers this nerve version of the Euler characteristic, at least for skeletal categories containing no nontrivial endomorphisms (See Tom’s Prop 2.11), and for many other kinds of categories too I think… like orbifolds and other things. Weird.

Posted by: Bruce Bartlett on February 8, 2007 6:44 PM | Permalink | Reply to this

Re: powers of zeta and path integrals

Bruce says:

Tim Silverman (see below) writes about how […]

somehow it seems clear that there must be some connection […]

This is getting to the point where the suspense is unbearable…

If somebody now comes up with a good way to think about U as something like a “sum”, “colimit”, “push-forward to a point” in a nice way (improving on my attempt at explaining it as the “universal infinitesimal disk holonomy”) and reducing on objects to what we already know, thus giving Freed’s principle a solid conceptual underpinning…

…if somebody now even does this, I’ll faint.

Posted by: urs on February 8, 2007 7:59 PM | Permalink | Reply to this

Re: powers of zeta and path integrals

I meant to make a little note replying to this, and then forgot, and everyone has moved on. But, just for the sake of completeness:

If one is doing this stuff in the context of partition functions and Feynman diagrams, the usual thing one uses is not the inverse of the zeta function but the exponential of the Laplacian, possibly multiplied by factors of time, transition probability or amplitude rates, etc (depending on whether one is doing QFT or statistical mechanics). That way, one still gets a sum over all paths, but they are weighted correctly, and the Laplacian does sensible things like removing a particle from one vertex before transferring it along an edge to the neighbouring vertex. One can do this either with a first-quantised/probabilistic version or a second-quantised/statistical version (or, I guess, a categorified version with structure types over the vertices and structure type operators over the edges :-) .)
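Tim’s point about the exponential of the Laplacian can be sketched numerically. The 3-cycle graph is my own toy example, and `expm_series` is a hypothetical helper (a truncated power series, fine for small matrices), not anything from the thread.

```python
import numpy as np

# Continuous-time random walk on a directed 3-cycle: the generator is
# L = A - D (adjacency minus out-degree), so exp(t L) is stochastic.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
L = A - np.diag(A.sum(axis=1))   # gains along edges minus losses at vertices

def expm_series(M, terms=60):
    # exp(M) via its power series; adequate for small, modest-norm matrices.
    out, power = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        power = power @ M / k
        out += power
    return out

P = expm_series(2.0 * L)         # transition probabilities at time t = 2

# Each row sums to 1 and all entries are nonnegative: the Laplacian
# removes the particle from a vertex as it transfers it along an edge,
# so probability is conserved -- unlike the self-amplifying U^{-1}.
assert np.allclose(P.sum(axis=1), 1.0)
assert (P > -1e-12).all()
```

The weighted sum over n-step paths is then just the series Σ (tL)^n/n!, i.e. the paths are exactly the interaction-free Feynman diagrams Tim describes.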

In this setup, the paths are precisely the Feynman diagrams: there are no interactions, so all the vertices in the diagrams are the trivial ones with one incoming leg and one outgoing leg.

The inverse of the zeta function behaves a bit differently, though, because of the strange self-amplifying effect produced by the positive entries along the diagonal. It’s really hard to imagine in terms of physics. (I had chains of leaky buckets floating around in my head, and I never got the intuition to work properly.)

I thought I could maybe think of it in terms of a generalisation of Morse functions, but that doesn’t seem to work either …

Posted by: Tim Silverman on February 22, 2007 9:05 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

But, I suspect we’re chewing on opposite ends of the same bone

I am delighted to see your discussion of quantization by passing from the “category of processes” to its category algebra.

Much of my recent effort was devoted to trying to understand that on general enough grounds, such that, in particular, it admits categorification in a nice way.

So, given a category X and a functor e^{i S} : X \to \mathrm{Vect}, I can restrict this functor to objects and then “push it to a point” #.

This produces the space of sections of the vector bundle defined by that functor.

If the space of objects of X is a finite set and if that functor sends every object to the ground field \mathbb{C}, then this space of sections is indeed nothing but the category algebra of the discrete category on the objects of X.

So in this special case we recover the construction of the “space of states” as the object part of forming the category algebra, the way you described #.

All my effort here was devoted to seeing if we can also understand the passage to the category algebra for non-identity morphisms as some “pushforward to a point” or something along these lines. That’s how I started talking about that operator U.

So maybe the question that I am struggling with is:

What is the passage X \mapsto \mathbb{C}[X] from a category to its category algebra really?

What is the abstract nonsense way to describe this assignment of algebras to categories?

(By the way, since an algebra is a category with a single object: probably the “point” that we are “pushing forward to”, somehow.)

The “right” answer should allow us to do something like forming a “weighted” category algebra, where we also throw in a functor X \to \text{something} and let that affect the resulting “category algebra”.

Posted by: urs on February 7, 2007 5:18 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

‘Don’t come with a special operator Z …’

That’s asking for trouble :D

Posted by: David Roberts on February 7, 2007 1:47 AM | Permalink | Reply to this

deficit angles?

From the end of TWF 244:

This is a bit subtle, and I don’t deeply understand it. But, Leinster proves so many nice theorems about this “Euler characteristic” that it’s clearly the right notion of the size of a category.

When we talked about this last time, I had grown somewhat fond of the interpretation mentioned here:

Tom Leinster gives a “local” formula for Euler characteristic, in that he obtains it by adding up quantities that are defined at each vertex.

It is known that, for ordinary manifolds, these local formulas integrate up the curvature of the manifold at each point: that’s the Gauss-Bonnet theorem.

There is also a version of this for spaces that are not manifolds, but just cell complexes (at least in dimension 2): here we define at each vertex the deficit angle and sum all these deficit angles up.

I haven’t tried to prove it, but I am willing to bet that Tom Leinster’s “weighting” is secretly a way to talk about the “deficit angle” at a given vertex.

As a consistency check, notice that the weighting at a vertex becomes smaller when more edges coincide at that vertex. That’s what we also expect to be true for a “deficit angle”.

Posted by: urs on February 5, 2007 10:37 AM | Permalink | Reply to this

Re: deficit angles?

Urs wrote:

I haven’t tried to prove it, but I am willing to bet that Tom Leinster’s “weighting” is secretly a way to talk about the “deficit angle” at a given vertex.

(Recall that given a polyhedron and a vertex v of it, the deficit angle or deficiency at v is 2\pi minus the sum of the angles at v. For instance, in a cuboid, every vertex has deficit angle \pi/2. The sum of the deficit angles of all the vertices is 2\pi times the Euler characteristic.)

Anyway - I’d love it if Urs’s bet were right! I don’t quite know what it means, but I agree that it would be great. Maybe someone else can figure out how to make it true.

Posted by: Tom Leinster on February 5, 2007 3:32 PM | Permalink | Reply to this

Re: deficit angles?

I don’t quite know what it means,…

Right, it’s not only that I did not try to prove something here, I did not even try to make something precise enough for it to admit anything like a proof! :-)

But maybe we can figure it out together.

I think I was imagining something like: assume that we take any two morphisms with the same target as “having the same angle enclosed between them”.

Then deficit angles translate into numbers of coincident morphisms, which is, in turn, what determines those “weightings”.

One aspect that makes this a little subtle is that you are talking about categories where most of the literature on deficit angles and the likes talks about graphs.

For that reason it might be very helpful to better understand what your construction implies in the special case that the category in question is that generated by a graph.

Posted by: urs on February 5, 2007 4:13 PM | Permalink | Reply to this

Re: deficit angles?

Urs wrote:

I haven’t tried to prove it, but I am willing to bet that Tom Leinster’s ‘weighting’ is secretly a way to talk about the ‘deficit angle’ at a given vertex.

You probably know this… but if your guess is right, I think this simple form of it should only be true when the classifying space B X of our category X is homotopy equivalent to a graph.

In general, the classifying space B X is a simplicial set built from vertices, edges, triangles, tetrahedra and so on. (I sketched how this worked in week244, but I said a lot more in items J and K of week117.) And, Tom’s Euler characteristic of the category X matches the usual Euler characteristic of this space:

\chi(X) = \chi(B X)

in cases where the right-hand side is well-defined.

So, even for very simple categories, Tom’s Euler characteristic detects higher-dimensional phenomena, not easily visible from thinking just about vertices and edges.

This is even true if our category is a quiver — a category freely generated by a graph.

Posted by: John Baez on February 7, 2007 2:09 AM | Permalink | Reply to this

Re: deficit angles?

Is it surprising that if you take any two categories with the same number of objects and same number of arrows between pairs, but different multiplication between arrows, then the Euler characteristics of their classifying spaces (if well-defined) are equal?

Is it even obvious for the Klein 4-group and the cyclic 4-group that these Euler characteristics should be the same?

Posted by: David Corfield on February 8, 2007 3:32 PM | Permalink | Reply to this

Re: deficit angles?

David wrote:

Is it even obvious for the Klein 4-group and the cyclic 4-group that these Euler characteristics should be the same?

“Obvious” might be going a bit far, but I don’t feel surprised about it. One way to think about the Euler characteristic of a group is via its actions: if G acts freely on a set S then we should have \chi(S/G) = \chi(S) \chi(G). The left-hand side is the cardinality of a finite set, only depending on the cardinality of S and the order of G. So \chi(G) must only depend on the order of G.
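This can be checked on a small example of my own devising: both groups of order 4 (cyclic and Klein) acting freely on the same 8-element set give the same number of orbits, as \chi(S/G) = \chi(S)\chi(G) = 8 \cdot 1/4 = 2 predicts. The orbit-counting helper is hypothetical, not from the thread.

```python
from itertools import product

def orbits(elements, gens, act):
    # Partition `elements` into orbits under the group generated by `gens`.
    seen, orbs = set(), []
    for x in elements:
        if x in seen:
            continue
        orb, frontier = {x}, [x]
        while frontier:
            y = frontier.pop()
            for g in gens:
                z = act(g, y)
                if z not in orb:
                    orb.add(z)
                    frontier.append(z)
        seen |= orb
        orbs.append(orb)
    return orbs

S = list(product(range(4), range(2)))   # an 8-element set, chi(S) = 8

# Z/4 acting freely: the generator adds 1 mod 4 in the first coordinate.
cyclic = orbits(S, [1], lambda g, p: ((p[0] + g) % 4, p[1]))

# Klein four-group: {0,1,2,3} under bitwise XOR is Z/2 x Z/2, and XOR on
# the first coordinate is again a free action.
klein = orbits(S, [1, 2, 3], lambda g, p: (p[0] ^ g, p[1]))

# chi(S/G) = chi(S) * chi(G) = 8 * (1/4) = 2, whichever group of order 4 acts.
assert len(cyclic) == len(klein) == 2
```

Only the order of the group enters, just as the folding picture below suggests.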

I view S as a piece of paper and S/G as S folded up and stuck to itself in some way. For a free action, it’s a rather simple kind of folding and there’s a uniform number of layers - namely, the number of elements of G. Different groups G of the same order give the same number of layers but a different pattern of folding.

Posted by: Tom Leinster on February 8, 2007 8:28 PM | Permalink | Reply to this

Re: deficit angles?

I was thinking in terms of nerves, but maybe you could extend your story from groups to categories. Can we imagine a piece of paper S on which a category ‘acts’?

Posted by: David Corfield on February 9, 2007 8:46 AM | Permalink | Reply to this

Re: deficit angles?

David wrote:

Is it even obvious for the Klein 4-group and the cyclic 4-group that these Euler characteristics should be the same?

Once you know that Tom’s Euler characteristic of a category reduces to my ‘groupoid cardinality’ in the case of groupoids, I think there’s a very nice story to tell about why a groupoid containing one object with 4-fold symmetry should have cardinality 1/4.

For example, we need this to get

|S//G| = |S|/|G|

for a finite group G acting on a finite set S, where S//G is the ‘weak quotient’, as explained here.

However, you know all this, so maybe you forgot that Tom’s Euler characteristic equals my groupoid cardinality in the case of groupoids — or maybe you’re wondering if it’s obvious that they’re equal.

Posted by: John Baez on February 10, 2007 3:08 AM | Permalink | Reply to this

Re: deficit angles?

I meant is their equality obvious via the classifying space construction? If we were to build up their classifying spaces according to the recipe you gave us in J and K of TWF 117, via their nerves, we’d have two rather different looking spaces. But do these spaces share some feature which lets us know they came from groups of the same order?

Posted by: David Corfield on February 10, 2007 9:12 AM | Permalink | Reply to this

Euler characteristic vs. homotopy cardinality

John wrote:

Tom’s Euler characteristic equals my groupoid cardinality in the case of groupoids — or maybe you’re wondering if it’s obvious that they’re equal.

David replied:

I meant is their equality obvious via the classifying space construction? If we were to build up their classifying spaces according to the recipe you gave us in J and K of TWF 117, via their nerves, we’d have two rather different looking spaces.

I assume the ‘they’ you’re talking about is completely different from mine. You must be talking about the classifying spaces of \mathbb{Z}/4 and \mathbb{Z}/2 \times \mathbb{Z}/2, and wondering how to tell that they have the same homotopy cardinality.

… do these spaces share some feature which lets us know they came from groups of the same order?

Yeah, but you’re not going to like what it is: it’s that their fundamental groups are the same size! That’s really all there is to it.

Remember, the classifying space B G = K(G,1) of a discrete group is uniquely determined by the fact that:

  1. it’s connected,
  2. its fundamental group is G,
  3. all its higher homotopy groups vanish.

So, if you hand me a space X and tell me it’s a classifying space of a finite group, I instantly know that it’s the classifying space of

G = \pi_1(X),

and I instantly know that the homotopy cardinality of X is

|X| = 1/|G|.

Alternatively, suppose you give me X = K(G,1) with its standard simplicial structure (as defined using the nerve of G). Then I can try to compute its Euler characteristic by working out the number of 0-simplices, minus the number of nondegenerate 1-simplices, plus the number of nondegenerate 2-simplices, etcetera. I’ll get a divergent sum — but if I use Abel summation to tame it, I again get the same answer:

1/|G|.

Similar stuff holds for the classifying space of a groupoid; this is just a disjoint union of classifying spaces of groups.

But the classifying space of a general category is vastly more interesting and tricky: we can get any space this way (up to weak homotopy equivalence), so the relation between homotopy cardinality and Euler characteristic becomes more mysterious. Tom’s new results help a lot, but there are still open questions.

Posted by: John Baez on February 11, 2007 1:32 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

the classifying space of a general category is vastly more interesting and tricky

I guess what I’ve been wondering about is what you can tell when given a space. Can you tell whether it’s the classifying space of a group? (Yes, if you can show the only nontrivial homotopy group is in dimension 1. But what if you’re restricted to the homology?)

More generally, can you tell which category’s classifying space is weakly homotopy equivalent to your space? From week 117 K, yes if you are given the space as a simplicial set. Ah, but there you work things in terms of a poset. So you’re saying that every space that’s the geometric realization of some simplicial set is homeomorphic to the classifying space of some poset?

So when do two categories have homeomorphic classifying spaces?

Posted by: David Corfield on February 12, 2007 9:31 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

David wrote:

I guess what I’ve been wondering about is what you can tell when given a space. Can you tell whether it’s the classifying space of a group? (Yes, if you can show the only homotopy is in dimension 1. But if restricted to the homology?)

I’ll start with some stuff you know, just so the kids out there will learn something today:

The homology of the classifying space B G of a discrete group G is usually called the homology of G. You can take this as a definition:

H_n(G) = H_n(B G)

though historically this was the discovery that made Eilenberg and Mac Lane famous! That’s why B G is called the Eilenberg–Mac Lane space K(G,1) when G is discrete.

So, you’re giving me this challenge. You’re going to tell me all the homology groups of some space, and I’ve got to guess if there’s a group having these homology groups.

Uhh…

I don’t know enough about the homology of groups to do very well at this game!

I know that the homology groups H_n(G) of a finite abelian group G are always periodic as a function of n. In fact, on a good day (not today) I can even tell you what they are.

But, I don’t know many general results about the homology of other finite groups, much less infinite ones.

In fact, if you really challenged me to this game, I’d cheat by peeking in here:

  • David Benson, Representations and Cohomology, Vol. II: Cohomology of groups and modules, Cambridge U. Press, 2004.

In fact, given how much I love the conceptual importance of group cohomology for understanding Postnikov towers, it’s really pathetic how little I know concretely about the cohomology of everybody’s favorite finite groups!

If I could live forever, or make arbitrarily many copies of myself, I would certainly want to spend an infinite amount of time learning about the cohomology of groups. It must be an incredibly beautiful subject.

Oh, by the way — you’ll note how I switched from homology to cohomology? It makes no difference as far as Euler characteristic is concerned, but cohomology is nicer in some ways, because the cohomology groups H^n(G) can be assembled to form a graded ring:

H^\bullet(G) = \oplus_{n = 0}^\infty H^n(G).

There’s a lot more information in this graded ring than the individual groups — or so they say.

By the way, I just noticed that the periodicity result I mentioned may be related to how you can resum the divergent Euler characteristic and get the homotopy cardinality. If the Betti numbers

b_n = \mathrm{rank}(\text{free part of } H_n(X))

are periodic, then the generating function

\sum_{n = 0}^\infty b_n x^n

is rational, which makes it a lot easier to study the Abel–summed Euler characteristic

\chi_{Abel}(X) = \lim_{x \uparrow 1} \sum_{n = 0}^\infty (-1)^n b_n x^n.

In fact this is so cool that I’m wondering if the homology groups of any finite group are periodic.

I’m also confused, though, because I’m thinking the free part of H_n(G) for G finite should often be trivial except for n = 0. For example, when G = \mathbb{Z}/2.

In fact, now I recall getting really confused about this with the help of Dan Christensen, who eventually convinced me that to compute the homotopy cardinality of B G in terms of homology groups of G we need to use homology with coefficients in \mathbb{Z}/q\mathbb{Z} for all prime powers q, or something like that.

Oh well. Though I didn’t really answer your question, at least it made me think.

So you’re saying that every space that’s the geometric realization of some simplicial set is homeomorphic to the classifying space of some poset?

That sounds right.

Before, I had been cautiously restricting myself by saying simplicial complex where you say simplicial set: a simplicial complex is a simplicial set where there’s at most one n-simplex with a given set of faces (for n \geq 1).

However, I can’t see why that would be essential here. So, I think you’re right.

Also:

Note that every ‘nice’ space (CW complex) is homotopy equivalent to the geometric realization of some simplicial set. So, people usually state this consequence of what you said: every nice space is homotopy equivalent to the classifying space of some poset.

Or, if you’re willing to settle for weak homotopy equivalence, you can drop the restriction to nice spaces, and say: every space is weakly homotopy equivalent to the classifying space of some poset.

But, for an impressive collection of spaces you can actually get a homeomorphism — using what you said. For example, all smooth manifolds. Every smooth manifold is homeomorphic to the classifying space of some poset.

And if your smooth manifold is compact, you can chop it up into finitely many simplices. So, every compact smooth manifold is homeomorphic to the classifying space of some finite poset.

The physicist Rafael Sorkin used these facts to argue that manifolds are irrelevant to quantum gravity: we only need posets. This is of course a very radical stance.

Posted by: John Baez on February 13, 2007 12:26 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

So, every compact smooth manifold is homeomorphic to the classifying space of some finite poset.

The physicist Rafael Sorkin used these facts to argue that manifolds are irrelevant to quantum gravity: we only need posets. This is of course a very radical stance.

I have been thinking along these lines recently in the context of that “n-particle” business #: let’s think of the “shape and inner structure” of the open string, say, not as the interval [0,1], but as the poset \{a \to b\}.

Notice how Simon Willerton’s idea is of this kind, and very useful.

In essence, Simon notices that instead of looking at the loop group in the world of spaces, \mathrm{Hom}(S^1, G), we should be looking at it in the world of categories: we replace S^1 by its fundamental groupoid and G by G regarded as a 1-object category, and consider the functor category \mathrm{Hom}(\Sigma(\mathbb{Z}), \Sigma(G)).

Taking classifying spaces, we get |\mathrm{Hom}(\Sigma(\mathbb{Z}), \Sigma(G))| \simeq \mathrm{Hom}(|\Sigma(\mathbb{Z})|, |\Sigma(G)|) = \mathrm{Hom}(S^1, B G) \simeq G. This means that \mathrm{Hom}(\Sigma(\mathbb{Z}), \Sigma(G)) behaves like loops in G! (And even for finite groups, where “loops in G” does not even make sense.)

That this point of view is highly useful is evidenced by the fact that adopting it in the above example makes the Freed-Hopkins-Teleman theorem essentially a triviality.

I did look at some of Sorkin’s reviews long time ago. Does he ever speak of fields on spacetime as functors on his posets?

Posted by: urs on February 13, 2007 12:42 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

This stuff is too fascinating. Too bad I have a day job!

Sorkin’s stuff is awesome. He was a big influence on the way I think about things. My first physics “mentor” was an old classmate of Sorkin’s by the name of Terry Honan. Terry’s PhD thesis (under Charles Misner) is still one of the most beautiful papers I have ever read in my life:

Geometry of Lattice Field Theory

Too bad he wrote it BT, i.e. Before TeX. One of these years, I may even TeXify it or pay someone to TeXify it for him as a gift (which would also be a gift to the world).

Posted by: Eric on February 13, 2007 7:15 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

Oh look! A discussion where I can even try to claim expertise, and not only familiarity! Wow!

So … cohomology of groups. As John pointed out, it’s much more fun to work with the cohomology, since the homology ends up being a graded cocommutative coalgebra, whereas the cohomology is a graded commutative algebra. And we know a LOT more about what to do with algebras than with coalgebras.

The next useful observation is that to a certain extent, it’s easier to work with infinite groups than to work with finite groups. And easier with abelian than nonabelian. So of course, my own field is nonabelian finite groups with coefficients in a finite field. ;)

Since, however, I don’t think about infinite groups very much, I won’t have much to say about them. For finite groups, though, you’d be hard put to stop me babbling about them.

I have, on my blog (featured here), as a result of my reading up on the theory of cohomology of finite groups, written up a bunch of explicit calculation of cohomology rings. In general, a bunch of things can be said about these rings: The cohomology of a finite group forms a finitely generated, graded commutative, noetherian algebra over the ground field (or \mathbb{Z}) we choose to work over. It has one generator that’s not involved in any relations, corresponding precisely to the Bott periodicity.

For a first few examples, the cyclic group of order n has, depending on the ground field, either cohomology concentrated in degree 0 and of rank 1 (finite field with characteristic not dividing n), or rank 1 in every degree (finite field with characteristic dividing n), giving rise to different ring structures for characteristic 2 and for all other characteristics, or \mathbb{Z}[x]/(nx) for coefficients in \mathbb{Z}.

For the next few results, I’ll stick to coefficients in the field of characteristic 2:

D_8 has cohomology ring k[x,y,z]/(xy) with x, y in degree 1 and z in degree 2. Bott periodicity 2.

Q_8 - the quaternion group of order 8 - has cohomology ring k[x,y,z]/(x^2+xy+y^2, x^2y+xy^2) with x, y in degree 1 and z in degree 4.

If you have a 2-group you want to figure out the cohomology of, Jon F. Carlson has the cohomology of all 2-groups up to the ones of order 64 published on his webpage and in book form.

If those aren’t enough (i.e. your group is bigger), then David J. Green is building a database too. He has some 128- and 256-groups, though far from all. He also has a bunch of 3-, 5-, 7- and 13-groups up.

Well … I think that’ll do for a first infodump. Please poke me if you want more. :)

Posted by: Mikael Johansson on February 14, 2007 1:08 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

So can you see how some Eulerian trickery (or Abel-summation) might allow you to calculate a characteristic as John suggests? That Bott periodicity you mention looks hopeful. We’re after calculations like:

1 + 0 \cdot x - 1 \cdot x^2 + 0 \cdot x^3 + 1 \cdot x^4 + \cdots = 1/(1 + x^2)

Therefore,

1 + 0 - 1 + 0 + 1 + \cdots = 1/2.
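Numerically, Abel summation here just means evaluating the power series slightly inside the unit disc; a small sketch of my own:

```python
# Abel summation of 1 + 0 - 1 + 0 + 1 + ...: evaluate the power series
# sum_n c_n x^n at x just below 1 and compare with 1/(1 + x^2).
def c(n):
    # Coefficients of 1/(1 + x^2): 1, 0, -1, 0, 1, 0, -1, ...
    return 0 if n % 2 else (-1) ** (n // 2)

x = 0.999
val = sum(c(n) * x**n for n in range(30000))

assert abs(val - 1 / (1 + x * x)) < 1e-8   # matches the closed form
assert abs(val - 0.5) < 1e-3               # and is approaching 1/2
```

Letting x tend to 1 from below recovers the Abel sum 1/2 that the divergent series itself refuses to give.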

From what you say about cyclic groups, John’s comment may well be right:

…to compute the homotopy cardinality of B G in terms of homology groups of G we need to use homology with coefficients in \mathbb{Z}/q\mathbb{Z} for all prime powers q, or something like that.

Posted by: David Corfield on February 15, 2007 4:14 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

By the way, given that above we were playing around with various analogies between issues arising in computing Euler characteristics (of categories) and quantum theory, it is rather remarkable that resummation techniques play such an important role on the Euler characteristic side – because the same techniques are of course the hallmark of perturbative quantum field theory!

John talks about that in great detail for instance here.

Posted by: urs on February 15, 2007 4:21 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

In that discussion, John wrote:

Exercise: show that 1 - 2 + 3 - 4 + … is not Cesaro summable but that it is (C,2) summable, and show that its (C,2) sum is 1/4.
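The numerical half of this exercise can be sketched with the standard (C,2) means S^{(2)}_n / \binom{n+2}{2}, where S^{(1)} is the sequence of partial sums and S^{(2)} its partial sums in turn. The helper below is mine, not John's.

```python
from math import comb

def cesaro2(terms):
    # (C,2) mean of a series: iterate partial sums twice, then divide
    # by the binomial weight C(n+2, 2).
    s = s1 = s2 = 0
    for n, a in enumerate(terms):
        s += a          # s_n: partial sums of the series
        s1 += s         # S^(1)_n: sums of the partial sums
        s2 += s1        # S^(2)_n: sums of the S^(1)_k
    return s2 / comb(n + 2, 2)

# 1 - 2 + 3 - 4 + ...: the (C,1) means oscillate, but (C,2) converges.
series = [(-1) ** n * (n + 1) for n in range(200001)]
mean = cesaro2(series)
assert abs(mean - 0.25) < 1e-4    # the (C,2) sum is 1/4
```

The convergence is slow (the error decays like 1/n), which is why so many terms are taken; the (C,1) means, by contrast, keep oscillating between roughly 0 and 1/2.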

Now, David Green tells us that the cohomology ring of \mathbb{Z}/4\mathbb{Z} has 2 generators, y in degree 1 and x in degree 2, and that there is one minimal relation: y^2 = 0.

So in degree n, how many ways are there to arrange a copies of x and b copies of y, such that 2a + b = n and no two y’s are adjacent?

A rapid calculation suggested we might get a sum like at the beginning of this comment, but a further check suggests otherwise. It can’t be beyond the wit of man to find a general formula. Also in the case of the Klein 4-group, where there are generators of degree 1 and degree 2 and no relations.

Posted by: David Corfield on February 15, 2007 5:24 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

Please note that the cohomology ring is (graded) commutative. So, it’s not a question of ordering $x$ and $y$ such that no two $y$s are adjacent: it’s a question of not more than two $y$s occurring at all.

In the case of $C_4$, all that is saying is giving a ring structure to the fact that we have a rank 1 cohomology group in each degree. So, if you want the Euler characteristic of the corresponding chain complex (which should, in some manner of speaking, be the same as the Euler characteristic of $B C_4$), you actually want to calculate precisely the sum $1 - 1 + 1 - 1 + 1 - 1 + \cdots$, since each degree has precisely rank 1.

Do we, maybe, want to involve the Poincaré-Betti-series here?

This holds in surprisingly many of the cyclic groups; in order to tell them apart, the cohomology isn’t enough: you need the Massey products/$A_\infty$-structure to do that.

Posted by: Mikael Johansson on February 15, 2007 7:06 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

And to think I have the means to remove my comment permanently. But I shall leave it up as a monument to how eagerness to find a result can make one forget basics.

I’m still intrigued though as to how the right cardinality can appear.

Posted by: David Corfield on February 16, 2007 7:57 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

Well, there’s another fun fact. In general, the cohomology, and indeed the entire representation theory, is the same (and rather simple) for all primes that don’t divide the group order as for $\mathbb{Z}$. Thus, if we know the cohomology with coefficients in $\mathbb{Z}$ and with coefficients in each $\mathbb{Z}/p\mathbb{Z}$, for $p \mid |G|$, we know the cohomology for every possible field. The cohomology doesn’t change for field extensions either; though other properties of the cohomology ring might.

Posted by: Mikael Johansson on February 15, 2007 7:09 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

So categories with homeomorphic classifying spaces can look quite different. E.g., $RP^{\infty}$ is the classifying space of $Z/2$, but is also the classifying space of a poset constructed from its simplicial structure. Now could one find a ‘weighting’ on this poset (making allowances for it not having finitely many objects), so that the weights sum to $1/2$? Hmm, what do we know about triangulating $RP^{\infty}$?

If multiple incidence between simplices were allowed, so that each simplex occurred twice in any simplex of higher dimension, I could convince myself that the weighting would alternate between $+1$ and $-1$ along the objects, which correspond to simplices of all dimensions.

Posted by: David Corfield on February 13, 2007 9:54 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

Oh, has this got something to do with projective resolutions?

Ah, I see that there’s also a category cohomology.

Posted by: David Corfield on February 13, 2007 10:41 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

If people have been working on the cohomology of categories for at least 30 years, and perhaps 40, surely they must have wondered about the Euler characteristic.

Posted by: David Corfield on February 14, 2007 9:59 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

David Corfield wrote:

Oh, has this got something to do with projective resolutions?

Sure! Usually people try very hard to make the cohomology of groups sound mysterious. They say: to define the cohomology

$H^n(G,A)$

of a group $G$ with coefficients in $A$, first take a projective resolution of the trivial $G$-module. Then hom it into $A$, getting a cochain complex. Then take the cohomology of that.

The less mysterious way goes like this: take your group $G$ and turn it into a space $B G$ — its classifying space. Then take the cohomology of that space with coefficients in $A$.

If you stare hard at these two recipes you’ll see they’re exactly the same if we use the most obvious resolution of the trivial $G$-module: the so-called ‘bar resolution’.

The reason I like the second way better is that it’s conceptually simpler. To understand it, you just need to realize that groups are secretly the same thing as connected spaces with vanishing higher homotopy groups. The group $G$ is secretly the same thing as the space $B G$, which is the connected space with vanishing higher homotopy groups and $\pi_1(B G) = G$.

Once you understand that a group $G$ is secretly a space $B G$, nothing could be more obvious than wanting to take the cohomology of that space — and this is what we call $H^n(G,A)$!

It takes a more sophisticated mind to understand why we’d want to do all that stuff involving projective resolutions! To really understand the point of that, you need to grok ‘cofibrant replacement’. Most kids seem to accept it as just some complicated recipe.

Posted by: John Baez on February 14, 2007 9:55 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

I’m not entirely convinced that looking at it as being a space is necessarily an easier way. I don’t find it easier to work from that standpoint in any case. :)

Sure, that’s what makes you go “Ooooh, let’s get topological with the group”; but it isn’t a large leap to do that from the direction I came from either.

The way I learned things, and thus also to a certain extent, the way I still view things is that for any sufficiently neat algebraic structure, there’s an obvious way to make a new algebraic structure that looks almost the same, only this time is a free object.

And we sure do like free objects.

One canonical way to show that this works is the bar resolution. The next nice thing you can do is to show that all the ways you could do it are in some sense the same, and that there is actually a well-defined “cheapest” way to do it as well.

Now, the way you get these free objects, they look a lot like what we got when we fiddled with topology, so maybe translating topology into algebra is something we can do with algebra as well - getting a translation from algebra into algebra.

And it works, and even ties in with other things all over the place.

Now, I know that this isn’t a very good ordering from a historical point of view, and I also know that the people who benefit the most from viewing things this way around tend to be the really ingrained pure algebraists. However, I hadn’t studied a year at university before I was counting myself among the hardcore algebraists, and so find this a very neat way to reason about it.

And then the cohomology of covering spaces ends up being reasonably easy to deal with, because we get just yet another free (projective - if you must :) resolution to fiddle with. The same, in some manner, holds to my mind for Tate, de Rham, Hochschild, Lie, operad, PROP, etc.

(which in no way would be to say that I do not appreciate the subtleties involved in all of these…)

I would take some issue with your way of describing the algebraists’ group cohomology though. It’s only $Ext_{kG}(k,k)$ anyway.

And that bit with homing into A is something you hide in saying “Then take the cohomology of that space with coefficients in A”. So, basically, you’re slightly unfairly blowing up a process that’s almost equivalent, by just using more words for it.

Even worse: in order to get hold of the cohomology of a space, you’ll want to figure out a way to get a chain (or cochain) complex out of that anyway.

Posted by: Mikael Johansson on February 14, 2007 10:15 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

Mikael Johansson wrote to John Baez:

Even worse: in order to get hold of the cohomology of a space, you’ll want to figure out a way to get a chain (or cochain) complex out of that anyway.

Yeah! After all, spaces are really just Kan complexes. You geometers are always making things so complicated, talking about open sets and all that business. ^_^

Posted by: Toby Bartels on February 14, 2007 10:32 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

Toby wrote:

Yeah! After all, spaces are really just Kan complexes.

Of course — what else could I have meant by ‘space’ in this context?

My point was just that taking the cohomology of a space is something any fool can get interested in: it’s a way of counting the holes in that space. It’s quite astounding when you first realize that a group is a special sort of space. But given that, it’s a no-brainer that counting the holes in this space would be fun to try — and that’s what group cohomology does.

By comparison, one has to be considerably more sophisticated to really understand why replacing an algebraic gadget with its projective resolution — or more generally, its cofibrant replacement — is an important thing to do before studying its maps to other objects.

The story goes like this:

Weak maps, where everything is preserved up to equivalence, are the morally correct things to study. But strict maps, where everything is preserved on the nose, are much easier. Luckily, weak maps out of an algebraic gadget are often the same as strict maps from its ‘cofibrant replacement’: a puffed-up version where all the equations have been replaced by isomorphisms, repeatedly, ad infinitum!

So, we develop the technology of cofibrant replacement (or in the abelian context, ‘projective resolution’) to reduce the study of weak maps to the study of strict maps… and this forces us to invent model categories (or in the abelian context, ‘derived categories’).

All this is wonderful stuff to know, and great for computations. I’m not knocking it.

But, I like getting people interested in math: for me, that’s a crucial part of doing math. If I can’t get lots of people interested in what I’m doing, I have trouble staying interested in it. And for me, the easiest way to get people interested in group cohomology is by showing them that groups are spaces, and therefore they have holes, which deserve to be counted.

Posted by: John Baez on February 16, 2007 6:58 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

John wrote in part:

But, I like getting people interested in math: for me, that’s a crucial part of doing math.

This is probably the point. For a lot of people (including physicists and other nonmathematicians), geometric imagery (like holes in spaces) is a good way to motivate ideas. Whereas for others (especially hard-core algebraists, and many people into combinatorics, formal logic, and computer science), this is not very interesting after all.

And behind all the arguments about the nature of mathematics, and all the talk about dualities between them, isn’t this the real difference between algebra and geometry? What gets us interested?

Posted by: Toby Bartels on February 16, 2007 11:25 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

Peter Webb has a very good introduction to the cohomology of categories.

Posted by: David Corfield on February 13, 2007 10:59 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

David wrote:

$\mathbb{R}P^\infty$ is the classifying space of $\mathbb{Z}/2$, but is also the classifying space of a poset constructed from its simplicial structure. Now could one find a ‘weighting’ on this poset (making allowances for it not having finitely many objects), so that the weights sum to 1/2? Hmm, what do we know about triangulating $\mathbb{R}P^\infty$?

Well, you’ve touched on a point where my reportage of Tom’s paper was a bit, umm, exaggerated. I said that a basic principle was: whenever two categories have the same classifying space, they have the same Leinster–Euler characteristic.

What Tom actually shows is something weaker: whenever $A$ is a finite skeletal category with no endomorphisms except identities, the Euler characteristic of the classifying space of $A$ equals the Leinster–Euler characteristic of $A$.

So, the principle, while certainly ‘morally correct’, has only been demonstrated for finite skeletal categories with no endomorphisms except identities.

This doesn’t cover the case $A = \mathbb{Z}/2$, nor does it cover the case where $A'$ is the face poset of the standard triangulation of $\mathbb{R}P^\infty = B A$. $A$ fails the ‘no endomorphisms except identities’ clause, while $A'$ fails the finiteness clause.

This is to be expected, since in the normal way of thinking, $\mathbb{R}P^\infty$ is ‘too big’ to have a well-defined Euler characteristic.

So, we’ll need to be sneaky.

Hmm, what do we know about triangulating $\mathbb{R}P^\infty$?

Well, the standard triangulation comes from the fact that $\mathbb{R}P^\infty$ is the classifying space of $\mathbb{Z}/2$. In this triangulation, $n$-simplices are length-$n$ composable strings of morphisms in the category $\mathbb{Z}/2$. But this is just a fancy name for length-$n$ bit strings, like $010100010$ for $n = 9$.

Here’s a picture of one for $n = 2$:

                            o
                           / \
                        1 /   \ 0
                         /     \
                        o-------o 
                            1

This corresponds to the bit string $10$. Note the bits in our string label the ‘backbone’ of the $n$-simplex, while the other edges automatically get labelled with bits due to the requirement that the bits around each triangle sum to zero. (I’m using the fact that in $\mathbb{Z}/2$, addition is the same as subtraction, so one can ignore pesky signs.)

By the way: there’s a wonderful relation between $\mathbb{R}P^\infty$ and Boolean logic, which we are just barely touching on here: if we think of $0, 1 \in \mathbb{Z}/2$ as truth values, certain logical operations on truth values turn into operations on points of $\mathbb{R}P^\infty$.

Anyway, there are $2^n$ $n$-simplices in the standard triangulation of $\mathbb{R}P^\infty$.

You can have fun working out the $n+1$ top-dimensional faces of a given $n$-simplex, in terms of bit strings. You see above that the three faces of $01$ are $0$, $1$ and $1$. What are the faces of $01101$? What’s the rule?
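Here is one guess at the rule, sketched in code (my illustration, not part of the original comment): the simplicial face maps for the nerve of $\mathbb{Z}/2$ either drop the first bit, drop the last bit, or replace two adjacent bits by their sum mod 2.

```python
def faces(s):
    """All n+1 codimension-1 faces of the n-simplex given by bit string s,
    in the nerve of Z/2: drop the first bit (d_0), compose two adjacent
    bits mod 2 (d_1 .. d_{n-1}), or drop the last bit (d_n)."""
    bits = [int(c) for c in s]
    out = [s[1:]]                                   # d_0: drop first bit
    for i in range(len(bits) - 1):                  # inner faces: compose
        out.append(s[:i] + str((bits[i] + bits[i + 1]) % 2) + s[i + 2:])
    out.append(s[:-1])                              # d_n: drop last bit
    return out

print(faces("01"))     # ['1', '1', '0'] -- the faces 0, 1 and 1
print(faces("01101"))  # the six faces of the puzzle string
```

Running it on the puzzle string lists its six faces, two of which coincide in pairs, just as two of the three faces of $01$ do.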

You can also have fun working out the ‘degeneracies’ — the ways an $n$-simplex can give rise to a degenerate $(n+1)$-simplex — but these are very simple: we get them by tossing in extra $0$’s into our bit string. So, for example, the 2-simplex $11$ gives rise to three degenerate 3-simplices: $011$, $101$ and $110$.

I hope I have that right!

If so, we see that a nondegenerate $n$-simplex is just a string of bits with no zeros in it. So, there’s just one nondegenerate $n$-simplex for each $n$.

This is why, when we try to compute the Euler characteristic of $\mathbb{R}P^\infty$ by counting the number of nondegenerate $n$-simplices for each $n$ and forming the alternating sum of the results, we get

$1 - 1 + 1 - 1 + \cdots$

which optimists declare equal to $1/2$.

Now, folks, try pondering the poset of faces of this triangulation of $\mathbb{R}P^\infty$!

Posted by: John Baez on February 14, 2007 1:17 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

John wrote:

you’ve touched on a point where my reportage of Tom’s paper was a bit, umm, exaggerated. I said that a basic principle was: whenever two categories have the same classifying space, they have the same Leinster–Euler characteristic

and then pointed out that actually, all I proved was that $\chi(A) = \chi(B A)$ whenever $A$ is a finite skeletal category in which the only endomorphisms are identities.

I have various comments about this, some of which will go in another post.

First, let me explain the point of the condition on $A$. It’s easy to see (and shown in my paper) that this condition on $A$ is equivalent to saying that the nerve $N(A)$ of $A$ has only finitely many nondegenerate simplices. And that’s a very nice condition for a simplicial set to satisfy: for then the Euler characteristic of its geometric realization is simply the alternating sum of the number of nondegenerate simplices in each dimension. Here, the geometric realization is $|N(A)| = B A$.

Second, the statement

whenever two categories have the same classifying space, they have the same Euler characteristic

is a more-or-less precise conjecture. To make it precise, you have to decide which categories you’re allowing and what “the same classifying space” means. Let’s say we allow all categories that have Euler characteristic in my sense, and that “the same classifying space” means “homeomorphic classifying spaces”, where the classifying space of a category is the geometric realization of its nerve.

(We could be more ambitious and say “homotopy equivalent classifying spaces”.)

I tried to prove this, but couldn’t. I quickly realized that if it’s true, we have something neat: an extended definition of topological Euler characteristic. For the purposes of this post, call a topological space “Eulerian” if it is homeomorphic to $B A$ for some category $A$ that has Euler characteristic in my sense. If the conjecture is true, we can (consistently) define the Euler characteristic of any Eulerian space to be the Euler characteristic of any category that it classifies.

This is reminiscent of Wall’s paper Rational Euler characteristic, where he gives a consistent definition of the Euler characteristic of any group belonging to a certain largeish class (precisely: any group $G$ for which there exists a finite-index subgroup $H$ admitting a finite complex as a classifying space).

Posted by: Tom Leinster on February 20, 2007 5:55 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

You guys stimulated me into thinking about all this over the weekend (while young love sang out its name in South California), but I’m only just getting round to replying.

David wrote:

E.g., $RP^\infty$ is the classifying space of $\mathbf{Z}/2$, but is also the classifying space of a poset constructed from its simplicial structure. Now could one find a ‘weighting’ on this poset (making allowances for it not having finitely many objects), so that the weights sum to 1/2?

There’s a story that can be told for any (triangulable) manifold. Take a triangulation of it. The simplices involved form a poset $P$. This poset may be infinite, but the ‘downset’ $\downarrow p := \{ x \in P : x \leq p \}$ of each $p \in P$ is finite. Indeed, $\downarrow p$ is isomorphic to the powerset of $\{0, \ldots, d\}$, if $p$ is a $d$-simplex.

Now, any finite poset has a unique weighting and a unique coweighting. Similarly, any poset that’s downwards finite in the sense above has a unique coweighting (although ‘weighting’ doesn’t make sense). In the case of a triangulation, the coweight of a $d$-simplex $p$ is $(-1)^d$.

So the Euler characteristic of the poset ought to be the (ill-defined) sum of the coweights, which is the alternating sum of the number of simplices in the triangulation of each dimension, as usual. In the special case of $RP^\infty$, this is $1 - 1 + 1 - 1 + \cdots$ which by the usual hocus-pocus is $1/2$.

Posted by: Tom Leinster on February 20, 2007 3:39 PM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

David wrote:

I’m still intrigued though as to how the right cardinality can appear.

You’re talking about getting $1/|G|$ as the Euler characteristic of $B G$ for a finite group $G$, by cleverly summing a divergent series?

I guess you know this, but I’ll repeat it for those who don’t…

The hard problem is to get $1/|G|$ by summing some divergent series computed from cohomology groups of $B G$. As I already sort of knew, it seems hopeless if we only use traditional Betti numbers, namely

$b_n = \mathrm{rank}(\text{free part of } H_n(X,\mathbb{Z}))$

Mikael explained why: for any cyclic group $\mathbb{Z}/k$ we have $b_0 = 1$ and $b_n = 0$ for $n \ge 1$.

Luckily, there’s an easy way to get $1/|G|$ by Abel-summing a divergent series: just take the alternating sum

$a_0 - a_1 + a_2 - a_3 + \cdots$

where $a_n$ is the number of nondegenerate $n$-simplices in the standard triangulation of $B G$. This always works — see page 17 here.

For simplicial sets with finitely many nondegenerate simplices, either approach gives the Euler characteristic: we can use the alternating sum of the Betti numbers, or the alternating sum of the numbers of nondegenerate $n$-simplices.

For $B G$, only the second approach seems to work. The first one may work if we use some sneaky extra stuff, like cohomology with different coefficient groups, but the second one is easy.

And, the second approach is more like what Euler originally did! ‘Vertices minus edges plus faces’. So, maybe he was smart.

Posted by: John Baez on February 17, 2007 12:07 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

Well, I was hoping for sneaky tricks according to the first approach. But really all this started with the hope that understanding stuff about groups, their classifying spaces and characteristics would help illuminate Tom’s Euler characteristic of a category. So can we extend your easy second approach to finite categories? It’s clearly not quite as simple as powers of $|G| - 1$, but may still be feasible.

Posted by: David Corfield on February 17, 2007 12:29 AM | Permalink | Reply to this

Re: Euler characteristic vs. homotopy cardinality

Oh, to find the number of nondegenerate $n$-simplices, it’s something like: take the matrix of arrow numbers, subtract the identity, raise to the power $n$, and sum the entries. Then somehow an alternating sum does the trick.

Oh, the sum is the same as adding up the entries of the matrix:

$1 - (A - 1) + (A - 1)^2 - \cdots$

which is $A^{-1}$. So that’s Tom’s result.
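As a sanity check of this identity, here is a small Python sketch (the three-object poset $a \to b \to c$ is my own example, not from the thread). For a finite poset, $A - I$ is nilpotent, so the alternating series terminates and must equal the sum of the entries of $A^{-1}$:

```python
# Hom-matrix A of the poset a -> b -> c: A[i][j] counts arrows i -> j
# (identities, the two generating arrows, and the composite a -> c).
A = [[1, 1, 1],
     [0, 1, 1],
     [0, 0, 1]]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
N = [[A[i][j] - I[i][j] for j in range(3)] for i in range(3)]  # A - I

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def total(X):
    return sum(map(sum, X))

# Sum of entries of (A - I)^n counts the nondegenerate n-paths; the
# alternating sum terminates because A - I is nilpotent.
chi, P, sign = 0, I, 1
while any(any(row) for row in P):
    chi += sign * total(P)
    P, sign = mul(P, N), -sign

print(chi)  # 1, matching the sum of the entries of A^{-1}
```

The answer 1 is as it should be: this poset has a terminal object, so its classifying space is contractible.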

Posted by: David Corfield on February 17, 2007 12:39 AM | Permalink | Reply to this

Re: deficit angles?

Urs wrote:

Tom Leinster gives a “local” formula for Euler characteristic, in that he obtains it by adding up quantities that are defined at each vertex.

I don’t get why Tom’s formula is local. Perhaps I’m missing something!

It seems genuinely non-local to me. Let $C$ be the category, and $A$ its hom-set matrix,

(1) $A_{xy} = |x \rightarrow y|$.

So $A_{xy}$ counts the number of arrows from $x$ to $y$. If $A$ is invertible, then Tom’s formula says that the Euler characteristic of $C$ is

(2) $\chi(C) = \sum_{x,y} A^{-1}_{xy}$

Since $A^{-1}$ is a non-local function of $A$ (remember the standard formula), $\chi(C)$ is certainly not manifestly a local invariant.

(Aside: if $A$ is not invertible, one looks for solutions $\psi, \psi'$ to

(3) $A \psi = 1, \quad \psi'^T A = 1^T$

and if both of these exist, we define

(4) $\chi(C) = \sum_x \psi_x \; (= \sum_x \psi'_x)$.

End of aside.)

This is an important point (that $\chi$ doesn’t seem to be local), because for groupoid cardinality there is indeed a local formula. If $G$ is a groupoid then one can define $\chi(G)$, in the Baez–Dolan way, by summing over the isomorphism classes of objects in $G$:

(5) $\chi(G) = \sum_{[x] \in Ob\,G} \frac{1}{|Aut(x)|}$.

This is equivalent to summing over the objects themselves:

(6) $\chi(G) = \sum_{x \in Ob\,G} \frac{1}{|x \rightarrow|}$.

This latter formula has been touted by Toby earlier in this blog. I learnt it from Simon’s paper on groupoids (advertisement: this paper contains lots of other cool stuff related to Tom’s paper! As a sneak peek, it suggests one shouldn’t think of the aforementioned $\psi$ as column vectors, but rather as sections of a line bundle). Anyhow, both these formulas are manifestly local - unlike Tom’s formula.

One might even try to define the Euler characteristic of an arbitrary category (not necessarily a groupoid) by the last formula, as Toby pointed out. It’s not the same as Tom’s definition though; for the poset $C = a \rightarrow b$ we get

(7) $\chi_{Tom}(C) = 1, \quad \chi_{Toby}(C) = \frac{3}{2}$.
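A tiny sketch checking the two values above for this poset (the names `chi_tom` and `chi_toby` are mine, for illustration; the $2 \times 2$ inverse is computed by hand):

```python
# Arrow counts for the poset C = (a -> b):
# hom(a,a) = hom(a,b) = hom(b,b) = 1, hom(b,a) = 0.
A = [[1, 1],
     [0, 1]]

# Tom's formula: sum the entries of A^{-1}.  Here A^{-1} = [[1,-1],[0,1]].
A_inv = [[1, -1],
         [0, 1]]
chi_tom = sum(map(sum, A_inv))

# Toby's formula: sum over objects of 1 / (number of arrows out of it).
arrows_out = [sum(row) for row in A]   # a has 2 arrows out, b has 1
chi_toby = sum(1 / n for n in arrows_out)

print(chi_tom, chi_toby)  # 1 and 1.5
```

So the two invariants really do disagree already on the walking arrow: only Tom's value matches the Euler characteristic of the interval, its classifying space.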

Anyhow, since Tom’s formula generalizes all sorts of other formulas, it must be the right one. But is it local?

Posted by: Bruce Bartlett on February 7, 2007 4:16 AM | Permalink | Reply to this

Re: deficit angles?

To try to understand this localness issue, why not take Gelfand’s advice and look at the simplest non-trivial example? So what’s the classifying space of your poset $C = a \to b$?

Or is this not sufficiently non-trivial?

Posted by: David Corfield on February 7, 2007 8:59 AM | Permalink | Reply to this

Re: deficit angles?

So what’s the classifying space of your poset $C = a \to b$?

That classifying space is the interval $\simeq [0,1] \subset \mathbb{R}$.

Posted by: urs on February 7, 2007 3:47 PM | Permalink | Reply to this

Re: deficit angles?

By saying that the formula is “local” I just meant that it arises as the sum over all vertices of some quantity assigned to each vertex, i.e. that it has the form $\xi(X) = \sum_{\mathrm{Obj}(X)} f(x)$. That’s reminiscent of the “local” formulas for the Euler characteristic of manifolds, which look like $\xi(X) = \int_X d\mu(x)$.

So, to borrow M. Hopkins’ words: “the Euler characteristic of $X$ accumulates via integration of local data”.

But you are perfectly right that this quantity assigned to each vertex in Tom’s formula is itself determined by a rather “non-local” prescription.

But so is, for instance, the “deficit angle” : in order to determine it at a given vertex, one has to probe the vicinity of that vertex.

That’s the analogy I had in mind. But John’s comment might suggest that this analogy applies, if at all, not to $X$ but to $B X$.

Posted by: urs on February 7, 2007 3:28 PM | Permalink | Reply to this

Re: deficit angles?

Anyhow, both these formulas are manifestly local - unlike Tom’s formula.

Hm, so now I am not sure what you mean by “local”: the formula $\xi(G) = \sum_{x \in \mathrm{Ob}\,G} \frac{1}{\sum_y \mathrm{card}\,\mathrm{hom}(x,y)}$ involves, under the first summation sign, a sum over lots of morphisms in the category, pretty much like the formula determining Tom’s “weighting” does.

What is it that distinguishes these two terms under the summation sign for you?

Posted by: urs on February 7, 2007 3:43 PM | Permalink | Reply to this

Locality of Euler characteristic

Urs wrote :

Hm, so now I am not sure what you mean by local.

By ‘locality’ I simply mean the following. Let $\chi$ be a rule which assigns to each category $C$ a number $\chi(C)$:

(1) $C \mapsto \chi(C)$.

I would call $\chi$ local if for each category $C$ (such that $\chi(C)$ is well-defined) one can express $\chi(C)$ as a local formula

(2) $\chi(C) = \sum_{x \in Ob(C)} f(x, \{x \rightarrow\}, \{\rightarrow x\})$.

That is, $f(x, \{x \rightarrow\}, \{\rightarrow x\})$ only depends on $x$ and the homs out of and into $x$. Also note that $f$ must be a universal function (valid for all $C$… no cheating :-)).

I don’t think that Tom’s $\chi$ is local in this sense. However, there are a lot of categories for which $\chi$ can be expressed locally (in the sense I defined above), as Tom points out below. For example, groupoids, or finite skeletal categories with no non-identity idempotents.

In fact, it would be cool to prove a kind of ‘no-go’ theorem stating that any numerical invariant $\chi$ of categories satisfying some basic properties (invariance under adjunction, nice behaviour w.r.t. limits, etc.) must necessarily be non-local.

Posted by: Bruce Bartlett on February 7, 2007 9:20 PM | Permalink | Reply to this

Re: Locality of Euler characteristic

Okay, thanks.

So I guess the picture is that we think of two objects as being “neighbours” if there is at least one morphism between them, and “non-neighbours” when there is no morphism between them.

Then “local” in your and Tom’s sense (“depending only on source and target fibers over an object”) means

“depending only on the neighbourhood of the object”.

Posted by: urs on February 7, 2007 9:50 PM | Permalink | Reply to this

Re: Locality of Euler characteristic

This is similar to usage in topos theory, where “local” refers to slice categories. For example, a category $\mathcal{E}$ is locally cartesian closed if the slice category $\mathcal{E}/X$ is cartesian closed for each object $X$ of $\mathcal{E}$.

Posted by: Tom Leinster on February 7, 2007 11:28 PM | Permalink | Reply to this

Re: deficit angles?

Bruce wrote:

I don’t get why Tom’s formula is local. Perhaps I’m missing something!

I don’t think it’s local either, for the reasons that you give.

But it’s “sometimes local”. What I mean is that there’s a fairly large class of categories for which a local formula can be given.

For instance, suppose that $C$ is a finite skeletal category in which the only endomorphisms are the identities. Say that a path

(1) $x_0 \stackrel{f_1}{\rightarrow} x_1 \stackrel{f_2}{\rightarrow} \cdots \stackrel{f_n}{\rightarrow} x_n$

in $C$ is nondegenerate if no $f_i$ is an identity. Then $C$ has a unique weighting $w$, given by

(2) $w(x) = \sum_n (-1)^n \,(\text{number of nondegenerate paths of length } n \text{ out of } x)$.

I would call this a local formula. Why? Well, for each $x$ we have the coslice category $x/C$, and $w(x)$ is defined entirely in terms of $x/C$. (Specifically, take the full subcategory $C_x$ of $x/C$ consisting of all the objects except the initial object $1_x$; then $w(x) = 1 - \chi(C_x)$.)
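Here is a sketch of this path-counting formula for a small example poset ($a \to b \to c$ with its composite arrow; the example and helper names are mine, not from the comment):

```python
def weight(x, arrows):
    """Alternating count of nondegenerate paths out of x: for each length n,
    add (-1)^n times the number of composable strings of n non-identity
    arrows starting at x.  arrows maps each object to its targets."""
    total, sign, endpoints = 0, 1, [x]   # endpoints of the length-n paths
    while endpoints:
        total += sign * len(endpoints)
        endpoints = [t for y in endpoints for t in arrows[y]]
        sign = -sign
    return total

# Non-identity arrows of the poset a -> b -> c (including a -> c):
arrows = {'a': ['b', 'c'], 'b': ['c'], 'c': []}
w = {x: weight(x, arrows) for x in arrows}
print(w)  # {'a': 0, 'b': 0, 'c': 1}

# Check the defining property of a weighting: sum over b of |x -> b| w(b) = 1.
for x in arrows:
    assert w[x] + sum(w[t] for t in arrows[x]) == 1

print(sum(w.values()))  # chi = 1, as expected for a poset with a top element
```

The loop terminates because a finite poset has no nondegenerate paths longer than its height, which is the finiteness this formula relies on.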

A similar statement can be made in the more general case that $C$ has no idempotents except identities (my Theorem 1.4).

Posted by: Tom Leinster on February 7, 2007 3:54 PM | Permalink | Reply to this

Euler characteristic via rational behaviour

Thinking about Tom’s Euler characteristic nearly drove me insane… I still don’t understand it, but let me at least elaborate on something Tom said earlier, which provides a handy illustration.

Basically, weightings and coweightings are all about rational behaviour of people with respect to emigration and immigration.

I really did go insane, you say? Read on.

As you know, understanding the movement of people from country to country is becoming increasingly important. Suppose that the scientific community tries to study this problem. As usual a silly controversy develops, and they are split into two groups: those who believe that studying emigration (leaving a country) is more fundamental, and those who believe that studying immigration (entering a country) is more fundamental. These kinds of petty arguments happen all the time in science.

These two groups stage two separate conferences: the ‘emigration scientists’ go to the North Pole, and the ‘immigration scientists’ go to the South Pole, say.

At the ‘emigration’ conference, the mathematicians propose to proceed as follows. They reason that if people thought rationally , then the ‘attractiveness’ of a country would be an absolute quantity. Thus, for instance, a person from New York and a person from Trinidad would both agree on how nice Gabon is as a country.

“Bah!”, say the psychologists. “It will depend on where somebody comes from!”

To tell whether people are behaving rationally or not, the mathematicians construct the following simple test. They say, “Let $|x \rightarrow b|$ be the number of ways one can go from country $x$ to country $b$, e.g. on this seat on this plane, or on this seat in this car, etc.

If people behaved rationally, it must be possible to assign an absolute ‘attractiveness score’ $\psi_x$ to each country $x$, such that for all $x$,

(1) bTargets(x)|xb|ψ b=1. \sum_{b \in Targets(x)} |x \rightarrow b| \psi_b = 1.

That is, summing over all the different countries bb which one can emigrate to from xx (including xx itself, of course!), and multiplying each country bb by its attractiveness and the number of ways one can reach bb from xx, must add up to 11. If this is not possible, then they are acting irrationally with respect to emigration ! Their opinion of Hong Kong must necessarily depend on whether they come from Japan or China.”

Such an assignment of numbers

(2) x \mapsto \psi_x

to the objects in a category C, satisfying the ‘emigration equation’ above, is called a weighting on C.

The immigration conference down at the South Pole runs similarly, except there the mathematicians have a different test for rational behaviour. They say that if people behaved rationally, it must be possible to assign an absolute ‘repulsiveness score’ \psi^x to each country x, such that for all x,

(3) \sum_{a \in Sources(x)} |a \rightarrow x| \psi^a = 1.

That is, summing over all the different countries a from which it’s possible to immigrate to reach x (including x itself, of course!), and multiplying each country a by its repulsiveness and the number of ways one can reach x from a, must add up to 1. If this is not possible, then they are acting irrationally with respect to immigration.

Such an assignment

(4) x \mapsto \psi^x

satisfying the ‘immigration equation’ above is called a coweighting on C.

Eventually the scientists return from their conferences, and the politicians force them to put their petty differences aside and to harmonize their approaches. They decide to call the movement of people around the world rational if it admits both a weighting and a coweighting, i.e. if people behave rationally with respect to both emigration and immigration. In other words, there will be two absolute lists (available online) of the attractiveness and repulsiveness scores of countries, e.g.:

(5) \array{ & Attractiveness & Repulsiveness \\ China & 3 & -2 \\ Russia & -1 & -2 \\ ... & & }

Moreover, these numbers will be consistent in that they will satisfy the emigration and immigration equations. In true political fashion, the attractiveness of a country need bear no relation to its repulsiveness.

So there we go: if a category C behaves rationally with respect to both emigration and immigration, then one defines its Euler characteristic as

(6) \chi(C) = \sum_x \psi_x (= \sum_x \psi^x).

The cool thing is that some categories don’t behave rationally. Tom gives an example of such a category in Example 1.11 (d) of his paper. In that case, as scientists, we can only shrug and say ‘C’est la vie! People are silly sometimes!’
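For what it’s worth, the rationality test is just a linear solve. Here is a minimal numerical sketch (a made-up two-country world, not one of Tom’s examples) that finds both lists and checks that their totals agree:

```python
import numpy as np

# Hypothetical toy world: two countries a and b, two distinct routes a -> b,
# plus the trivial "stay put" route at each country.
# zeta[x, y] = number of ways to go from country x to country y.
zeta = np.array([[1., 2.],
                 [0., 1.]])

ones = np.ones(2)
attractiveness = np.linalg.solve(zeta,   ones)  # weighting  psi_x: [-1, 1]
repulsiveness  = np.linalg.solve(zeta.T, ones)  # coweighting psi^x: [1, -1]

# The two lists differ, but their totals agree: the Euler characteristic.
print(attractiveness.sum(), repulsiveness.sum())  # 0.0 0.0
```

Note the negative score: just as in the table above, an ‘irrationally popular’ destination can end up with negative attractiveness.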

Posted by: Bruce Bartlett on February 7, 2007 11:09 PM | Permalink | Reply to this

Re: Euler characteristic via rational behaviour

Woops! When I said I would like to elaborate on something Tom said earlier, I was referring to this post.

Posted by: Bruce Bartlett on February 7, 2007 11:23 PM | Permalink | Reply to this

Re: Euler characteristic via rational behaviour

I love this!

Last night I made yet another attempt to find a nice way of thinking about (co)weightings. Flows on a surface, curvature, and various probabilistic constructions went through my mind. The best I could manage wasn’t as good or as well-developed as Bruce’s, and my real conclusion was “goddammit, surely we ought to be able to come up with a decent picture between us”. But for what it’s worth, here’s my scenario.

Take a directed graph. Think of the nodes as people, who each hold a sum of money (or as countries who each hold a number of people, maybe…). The edges are going to tell us to what extent people are linked: e.g. if there are lots of edges from person x to person y then there will be lots of opportunities for x’s money (or debt) to be transferred to y.

Take a weighting \psi on the graph.

Suppose that each person possesses a certain amount of money (perhaps negative, just as in real life). A big process of money transferral now takes place: every person x looks at the various edges f : x \rightarrow y pointing away from them, and gives a proportion \psi_y of their money to person y. If there are many edges from x to y, then y will repeatedly receive this amount from x. Since \psi is a weighting, all this is possible - no money is conjured up magically or vanishes into thin air.

This conservation of money corresponds to Bruce’s rationality, I guess. But it’s a somewhat artificial scenario. Could something be done with conservation of energy etc instead?

The money transfer process is of course linear: given a vector v specifying how much money each person has, the amount everyone has after the transfer is specified by M v (where M is a certain square matrix that’s easy to figure out). The rows (or columns?) of M each add up to 1, which I think means it’s called a ‘Markov matrix’ - doubtless someone else knows this.

PS - Very pleased to see Gabon mentioned.

Posted by: Tom Leinster on February 7, 2007 11:59 PM | Permalink | Reply to this

Re: Euler characteristic via rational behaviour

Take a directed graph. Think of the nodes as people, who each hold a sum of money…

This is cool - I think it actually introduces an important ingredient into the mix which had been confusing me, and which my analogy above fails to account for. Namely, in some sense at each object x you actually have two numbers: the first is the amount of money that x has, and the second is the proportion \psi_x which he/she must give away when the Great Money Transfer takes place.

Somehow the amount of money is a hidden variable. Mmm… lol! This stuff has driven me crazy, I refuse to try and understand it any more. Hopefully another blogger will ‘weigh in’ with yet another explanation!

Posted by: Bruce Bartlett on February 8, 2007 2:26 AM | Permalink | Reply to this

Re: Euler characteristic via rational behaviour

Okay, one last question. If someone draws a bunch of dots on a page and connects them with a bunch of arrows (with at least one arrow going from each object to itself), is there always a category having exactly these objects and arrows?

That is, is there some constraint between the sizes of the hom-sets |x \rightarrow y| in a finite category? The only one I know of is that every object must have at least one arrow going back to itself, otherwise there can’t possibly be identities.

Can you draw an ‘uncategorizable’ graph? A graph which admits no associative composition operation on the morphisms? You can’t do it for one-object graphs: you can always turn such a thing into a group, since there exists a group of every order.

Posted by: Bruce Bartlett on February 8, 2007 2:35 AM | Permalink | Reply to this

Re: Euler characteristic via rational behaviour

Actually, not every directed graph with loops at each node is a category. The transitive closure, however, will be.

One example occurs in the Haskell community, in the course of explaining categories to the computer science crowd:
The Haskell Wikibook: Category theory
The exercises a page or two down that page give a category with two objects, A, B and identities on both, and then Hom(A,B)={g}, Hom(B,A)={f,h}.

For this category, the composite gf has to be id_B, and the composite hg has to be id_A, since no other possibilities exist for the composite arrow; and then hgf takes on different values depending on bracketing.
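The contradiction can be checked mechanically. A small sketch (my own Python rendering of the wiki’s example; the composition table below is forced, since Hom(A,A) and Hom(B,B) contain only the identities):

```python
# Objects A, B; non-identity arrows: g : A -> B, f : B -> A, h : B -> A.
# Hom(A, A) = {id_A} and Hom(B, B) = {id_B}, so every endo-composite is
# forced: g.f must be id_B, h.g must be id_A.  Associativity then demands
# (h.g).f == h.(g.f), i.e. f == h, which is false.

def check_associative():
    comp = {('g', 'f'): 'id_B',          # forced: only endo of B
            ('h', 'g'): 'id_A',          # forced: only endo of A
            ('id_A', 'f'): 'f',          # identity laws
            ('id_A', 'h'): 'h',
            ('h', 'id_B'): 'h',
            ('f', 'id_B'): 'f'}
    lhs = comp[(comp[('h', 'g')], 'f')]  # (h.g).f = id_A . f = f
    rhs = comp[('h', comp[('g', 'f')])]  # h.(g.f) = h . id_B = h
    return lhs == rhs

print(check_associative())  # False: no category has exactly these arrows
```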

Posted by: Mikael Johansson on February 8, 2007 10:25 AM | Permalink | Reply to this

Re: Euler characteristic via rational behaviour

Ah, you mean “for this diagram to be a category” (added words italicized).

Posted by: John Armstrong on February 8, 2007 3:51 PM | Permalink | Reply to this

Re: Euler characteristic via rational behaviour

Thanks! This is a great example. The Haskell Wikibook : Category theory page is really cool. I had never come across it before.

Posted by: Bruce Bartlett on February 8, 2007 5:48 PM | Permalink | Reply to this

Re: Euler characteristic via rational behaviour

The Haskell crowd in general do some seriously funky thinking around categories, since it turns out that one of the more comfortable and clear ways to talk about code in that language is to view it as morphisms in a very specific cartesian closed category.

I very much recommend looking around at the various wikis that get maintained and linked from that site - there’s a lot of very hands-on and explicit stuff out there.

Posted by: Mikael Johansson on February 9, 2007 8:51 AM | Permalink | Reply to this

Re: Euler characteristic via rational behaviour

Can you draw an ‘uncategorizable’ graph?

You’ve noted that every node must have a loop. This is the nullary part of a binary/nullary pair whose binary part we also need: Whenever there’s an edge from a to b and an edge from b to c, then there must be an edge from a to c. That is, the (directed adjacency relation of the) graph must be not only reflexive but also transitive. As far as ‘the sizes of the hom-sets’ goes, of course, this only distinguishes zero from non-zero (or better, it sees only the underlying truth value of each hom-set).

It’d be nice if this were all that’s necessary, but the counterexample from the Haskell wiki destroys that hope!

Posted by: Toby Bartels on February 9, 2007 6:09 AM | Permalink | Reply to this

Re: Euler characteristic via rational behaviour

Your ‘Markov matrix’ is sometimes called a row-stochastic matrix (‘column-stochastic’ if it’s the columns which sum to one, ‘doubly stochastic’ if it’s both rows and columns).

You’re coming very close to material I mentioned in my Category Theoretic Probability Theory post. In chapter 6 of Guy Lebanon’s thesis we read about certain row-stochastic matrices.

Perhaps you’re right to have ‘probabilistic constructions’ going through your head.

Posted by: David Corfield on February 8, 2007 8:41 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

In TWF, John wrote:

there are lots of important categories whose zeta function is not invertible: for example, any group other than the trivial group. So, Leinster needs a somewhat more general definition to handle these cases. I don’t feel I deeply understand it, but I’ll explain it, just for the record.

I might not deeply understand it either, but I’ll try to explain it as far as I can.

First, the definitions. (John also gives them in TWF.) Let X be a finite category. A weighting on X is a function w on ob X (with values in the rationals, say) such that for each object x \in X,

(1) \sum_{y \in X} |Hom(x, y)| w(y) = 1.

A coweighting on X is a function w^* on ob X such that for each x,

(2) \sum_{y \in X} |Hom(y, x)| w^*(y) = 1.

The category X might have no weightings, one weighting, or many weightings, and similarly for coweightings. But if X has at least one weighting and at least one coweighting then

(3) \sum_x w(x) = \sum_x w^*(x)

for any weighting w and coweighting w^*, and this common value is called the Euler characteristic of X.
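As a quick sanity check: when the matrix ζ = (|Hom(x, y)|) happens to be invertible, the weighting is unique (ζ⁻¹ applied to the all-ones vector), and χ is just the sum of all the entries of ζ⁻¹ — this is the invertible, Möbius-inversion case John describes in TWF. A small sketch:

```python
import numpy as np

def chi(zeta):
    """Euler characteristic when zeta = (|Hom(x, y)|) is invertible:
    the unique weighting is zeta^{-1} applied to the all-ones vector,
    so chi is simply the sum of all entries of zeta^{-1}."""
    return np.linalg.inv(np.asarray(zeta, dtype=float)).sum()

print(chi([[2]]))              # Z/2 as a one-object category: 0.5 = 1/|G|
print(chi([[6]]))              # S_3 as a one-object category: 1/6
print(chi([[1, 1], [0, 1]]))   # the arrow category 0 -> 1: 1.0
```

The one-object cases recover |A/G| = |A|/|G| in spirit: a group, viewed as a category, counts for 1/|G|.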

OK, so what’s a weighting, really?

First Answer This is how I stumbled on the definition of Euler characteristic.

Everyone knows that if A and B are finite subsets of some larger set then

(4) |A \cup B| = |A| + |B| - |A \cap B|.

You can understand A \cup B as a pushout: not just any old pushout of finite sets, but one in which the two functions along which you’re pushing out are injective.

Everyone also knows that

(5) |A/G| = |A|/|G|

when A is a finite set acted on by a finite group: not any old how, but where the action is free. This can also be understood as a colimit (of the functor from the one-object category G to FinSet that corresponds to the action).

I wanted to find a similar formula for any type of finite colimit.

Of course, we’ll need to impose some restriction generalizing the two conditions in italics above. I don’t want to go into this right now, so trust me: there’s a sensible condition on functors F : X \rightarrow \mathbf{FinSet} that does the job. Call such functors good.

The result is that for any finite category X and good functor F : X \rightarrow \mathbf{FinSet},

(6) |\lim_\rightarrow F| = \sum_x w(x) |F(x)|

where w is any weighting on X. (So it’s a weighted sum - hence the name.)

When you’re calculating colimits, a weighting therefore tells you how much each object matters - or how much it counts for. So the total weight tells you how much the whole category matters, or how much it counts for. This is its Euler characteristic/cardinality.

The functor X \rightarrow \mathbf{Set} with constant value 1 isn’t usually good, but still you can ask: how big would its colimit be if it were good? The answer is again the Euler characteristic.
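A sanity check of the colimit formula on the pushout example: the span c → a, c → b has weighting (1, 1, −1), and the weighted sum is exactly inclusion-exclusion (the set sizes below are made up):

```python
import numpy as np

# Pushout shape: c -> a, c -> b, plus identities; objects ordered (a, b, c).
# zeta[x, y] = |Hom(x, y)|.
zeta = np.array([[1., 0., 0.],
                 [0., 1., 0.],
                 [1., 1., 1.]])
w = np.linalg.solve(zeta, np.ones(3))   # weighting: [1, 1, -1]

# A good functor F sends a, b to the two subsets and c to their intersection;
# the colimit is the union, and the weighted sum is inclusion-exclusion.
sizes = np.array([4., 5., 2.])          # |A|, |B|, |A ∩ B| for some sets
print(w @ sizes)                        # |A ∪ B| = 4 + 5 - 2 = 7.0
```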

Of course, you can tell a similar story with coweightings, using contravariant functors on X. Maybe someone has a good explanation for why both calculations of “how much the category counts for” give the same result.

Second Answer This involves the Lefschetz number of an endofunctor.

Let X be a finite category and T a functor X \rightarrow X. Let \mathbf{Fix}(T) be the category of (strict) fixed points of T, whose objects are the objects x of X satisfying T(x) = x, and whose arrows are the arrows f of X satisfying T(f) = f. The Lefschetz number of T is the Euler characteristic of \mathbf{Fix}(T).

This says that the Lefschetz number of T is the sum of the weights of the fixed points of T. But in topology, the Lefschetz number of a continuous self-map t of a space is the sum of the indices of the fixed points of t! So weight is like index.

There are two subtleties: that the same can also be said of coweights, and that there are in general many weightings and coweightings, so it’s wrong to say “the” weight (as I just did). But I feel there ought to be something to this analogy.

Posted by: Tom Leinster on February 6, 2007 7:50 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

Above, Tom Leinster explained how to compute cardinalities of colimits of functors with values in finite sets:

For any good functor F : C \to \mathrm{FinSet} and for any weighting w on C, we have |\mathrm{colim}\, F| = \sum_x w(x)\,|F(x)|.

When I first read this comment, I didn’t appreciate its full impact for our discussion of quantization-by-abstract-nonsense.

Maybe Bruce Bartlett completely saw the meaning of this already. A comment by him in another thread finally triggered my understanding. So, for the record: in another thread #, I was rambling on about how quantization by the path integral would look much more natural if we assumed that the “action” associated to a trajectory of a system were not regarded as a number, c, but as a set S with cardinality |S| = c.

In that context we discussed \mathbb{C}\mathrm{Set}s, but never mind for the moment. Let’s just replace \mathbb{C} by the trivial field and consider just finite sets here.

We also discussed at length how we want to think of those trajectories as living in a category, with morphisms connecting different histories of the system in an appropriate way.

So our action would in fact be a functor S : \mathrm{hist} \to \mathrm{Set}.

The “path integral” of such a set-valued action should then be nothing but the colimit of this functor: \exp \int_{\mathrm{hist}} S := \mathrm{colim}_{\mathrm{hist}} S.

(We may equivalently think of this colimit as computing some “space of sections” \mathrm{sect} = [1, S] along the lines we have discussed a lot lately.)

I did talk about how, by making the ordinary action take values in sets instead of in numbers, the procedure of quantization becomes a completely canonical one, in that, after specifying the basic data (categories called parameter space, target space, configuration space, history space, and a certain functor on target space), the formalism takes care of all the rest.

However, I did not quite appreciate how this is related to Tom’s result, because I was thinking, for simplicity, of examples where the category of histories is taken to be discrete. Now I see that Tom’s result is precisely the result that is needed to make the quantization-by-abstract-nonsense program go through even for the general cases.

The above equation says nothing but that the weighting on the category of histories plays exactly the role of the measure appearing in the path integral in quantum mechanics textbooks.

Thanks to Bruce for making me see this!

Posted by: urs on February 22, 2007 10:01 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

I wrote:

(We may equivalently think of this colimit as computing some “space of sections”

Whoops, now that these functors take values in sets instead of vector spaces, I should be more careful here: in particular, it is the space of cosections that would yield the colimit.

What I mean is this:

For \mathrm{tra} : C \to \mathrm{FinSet} a functor, its space of (flat) cosections would be its co-push-forward to a point, i.e. the functor q(\mathrm{tra}) such that \mathrm{Hom}(\mathrm{tra}, p^* S) \simeq \mathrm{Hom}(q(\mathrm{tra}), S), where S is any set, regarded as a functor S : \bullet \to \mathrm{FinSet}, and where p : C \to \bullet. This yields q(\mathrm{tra}) = \mathrm{colim}_C \mathrm{tra}.

Posted by: urs on February 23, 2007 8:53 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

Sorry if my question is trivial.
If we have morphisms from one morphism to another, we get a 2-category, correct?
But what would happen if we had a morphism from an object to a morphism? I’m asking because such a construction in fact represents a logic gate in a circuit. Is there a name for such a construction, and are there some interesting results about it? Is it connected to projective geometry?

Posted by: serg271 on February 7, 2007 8:10 AM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

Something that assigns a morphism to each object. Hmm… Sounds like a natural transformation to me.

Posted by: John Armstrong on February 7, 2007 1:27 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

what would happen if we have morphism from object to morphism?

John Armstrong interpreted this # as asking: “what is it that assigns a morphism to an object”. Indeed, that would most naturally be a natural transformation.

But if you rather want something of the form x \stackrel{f}{\to} (y \to z), which is an arrow from a point to an arrow, then I would suggest thinking of the point x as the identity arrow on x, x := (x \stackrel{\mathrm{Id}}{\to} x), as one can naturally do. Then the f above would again be a 2-morphism.

Posted by: urs on February 7, 2007 3:53 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

Yep, that is exactly what I was looking for.

Posted by: serg271 on February 7, 2007 7:03 PM | Permalink | Reply to this
Read the post QFT of Charged n-particle: Chan-Paton Bundles
Weblog: The n-Category Café
Excerpt: Chan-Paton bundles from the pull-push quantization of the open 2-particle.
Tracked: February 7, 2007 8:16 PM

Re: This Week’s Finds in Mathematical Physics (Week 244)

It’s interesting to do some actual calculations for graphs. Here are some examples. (I’ll assume for the most part that all vertices have trivial automorphism groups.) It’s important, particularly if we are thinking about graph Laplacians, to remember that the \zeta operator covers all paths between vertices, not just all edges.

Solving for the weights step by step

Consider the very simple graph

x_1\rightarrow x_2\rightarrow x_3\rightarrow x_4\rightarrow x_5

We solve \zeta\phi = 1 step by step, starting at the left.

\phi_1 = 1

This comes from the “identity” operator on the vertex x_1.

\phi_1 + \phi_2 = 1

The contribution from \phi_1 comes from the fact that there is one path from x_1 to x_2. The \phi_2 comes from the “identity” on x_2.

Hence \phi_2 = 0.

\phi_1 + \phi_2 + \phi_3 = 1

There is one path from x_1 to x_3, one path from x_2 to x_3, and the identity on x_3.

So we get \phi_3 = 0, and obviously this continues for all later \phi_i.

Summing over all the \phi_i, we get a total of 1.

What about converging inputs?

E.g.

x_1, x_2 \rightarrow x_3 \rightarrow x_4 \rightarrow x_5, with both x_1 and x_2 having edges into x_3 (sorry to skimp on the LaTeX art…)

We have

\phi_1 = 1

\phi_2 = 1

\phi_1 + \phi_2 + \phi_3 = 1

hence \phi_3 = -1

Then

\phi_1 + \phi_2 + \phi_3 + \phi_4 = 1

So \phi_4 = 0, and so on downstream.

We see the condition \zeta\phi = 1 steadily accumulating the Euler characteristic of the subgraph consisting of the “drainage basin” of a vertex.

A couple more examples:

\array{ & &x_2&\rightarrow&x_4\\ &\nearrow& & & &\searrow\\ x_1& & & & & &x_6\\ &\searrow& & & &\nearrow\\ & &x_3&\rightarrow&x_5\\}

\phi_1 = 1

\phi_1 + \phi_2 = 1

So \phi_2 = 0

\phi_1 + \phi_3 = 1

So \phi_3 = 0

\phi_1 + \phi_2 + \phi_4 = 1

So \phi_4 = 0

\phi_1 + \phi_3 + \phi_5 = 1

So \phi_5 = 0

Finally,

\phi_1 + \phi_2 + \phi_4 + \phi_1 + \phi_3 + \phi_5 + \phi_6 = 1

Note that \phi_1 appears twice, since there are two paths from x_1 to x_6.

We get \phi_6 = -1, and the total sum \phi_1 + \phi_2 + \phi_3 + \phi_4 + \phi_5 + \phi_6 = 0.

Our last graph is this:

\array{ & &x_2&\rightarrow&x_4\\ &\nearrow& & & &\searrow\\ x_1& & &\searrow& & &x_6\\ &\searrow& & & &\nearrow\\ & &x_3&\rightarrow&x_5\\}

\phi_1 = 1

\phi_1 + \phi_2 = 1

\phi_1 + \phi_3 = 1

\phi_1 + \phi_2 + \phi_4 = 1

\phi_1 + \phi_3 + \phi_1 + \phi_2 + \phi_5 = 1

\phi_1 + \phi_2 + \phi_4 + \phi_1 + \phi_3 + \phi_5 + \phi_1 + \phi_2 + \phi_6 = 1

Note that, in the last line, \phi_1 appears three times, since there are three paths from x_1 to x_6; \phi_2 appears twice, \phi_5 once, etc.

So \phi_1 = 1

\phi_2 = 0

\phi_3 = 0

\phi_4 = 0

\phi_5 = -1

\phi_6 = -1

The sum \phi_1 + \phi_2 + \phi_3 + \phi_4 + \phi_5 + \phi_6 = -1

We see that if there are two (or more) paths from a to b, then we have a loop, and an extra contribution from a (or rather from its drainage basin) gets subtracted.
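All three examples can be verified mechanically. A sketch (assuming trivial automorphism groups, and solving Tim’s column equations, which amount to the coweighting ζᵀφ = 1):

```python
import numpy as np

def zeta_from_edges(n, edges):
    """Path-count matrix of a loop-free directed graph viewed as a free
    category: zeta[x, y] = number of directed paths x -> y, identities
    included.  The adjacency matrix A is nilpotent, so
    I + A + A^2 + ... = (I - A)^{-1}."""
    A = np.zeros((n, n))
    for x, y in edges:
        A[x, y] += 1
    return np.linalg.inv(np.eye(n) - A)

def chi(zeta):
    """Solve zeta^T phi = 1 (Tim's equations sum over incoming paths)
    and return the total of the phi_i."""
    phi = np.linalg.solve(zeta.T, np.ones(len(zeta)))
    return phi.sum()

# Chain x1 -> x2 -> x3 -> x4 -> x5
print(int(round(chi(zeta_from_edges(5, [(0, 1), (1, 2), (2, 3), (3, 4)])))))  # 1
# Diamond: two routes x1 -> ... -> x6
diamond = [(0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 5)]
print(int(round(chi(zeta_from_edges(6, diamond)))))                           # 0
# Diamond plus the extra edge x2 -> x5: three routes x1 -> x6
print(int(round(chi(zeta_from_edges(6, diamond + [(1, 4)])))))                # -1
```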

Inverting the matrix by series

Let’s take another approach, and represent \zeta as I + S, where I is the identity and S has zero diagonal. (We’re still assuming that all vertices have trivial automorphism groups.)

Then \zeta^{-1} = {1\over{I+S}} = I - S + S^2 - S^3 \dots

The term in S^n represents paths from one vertex to another which are made up of n sub-paths (not necessarily n edges!). Because all paths are finite (we have a finite category), S is nilpotent, so the series ends after a finite number of terms. We can see the sum over chains of paths that appears in Tom’s Theorem 1.4.

(In fact we can do better. Suppose we allow vertex automorphisms. Then we can write

\zeta = \Lambda + S, where \Lambda is a diagonal matrix with \Lambda_{ii} = \vert Aut(x_i)\vert. We have

\zeta = \Lambda + S = \Lambda(I + \Lambda^{-1}S)

so

\zeta^{-1} = {1\over{I+\Lambda^{-1}S}}\Lambda^{-1}

= (I - \Lambda^{-1}S + (\Lambda^{-1}S)^2 - (\Lambda^{-1}S)^3 + \cdots)\Lambda^{-1}

So in our series, S gets premultiplied by \Lambda^{-1}, which means that each sub-path in any chain of paths represented in S^n gets multiplied by the reciprocal of the number of automorphisms of its source; the final factor of \Lambda^{-1} accounts for the automorphisms of the target of the final sub-path.

Also, looking through the solutions for the weightings in the first part of this post, it’s easy to see how multiplying the “identity” term in each equation by some whole number, to account for the existence of more than just the identity automorphism, will result in extra factors of the reciprocals of the orders of the automorphism groups of the vertices along each chain.)

Combinatorial interpretation of the series

The alternating sum is secretly (or not-so-secretly) applying an inclusion-exclusion principle to extract edges from paths, using the inclusion relation among paths to do so.

Consider summing the entries of \zeta^{-1} column by column. Each column relates to one vertex.

If all the paths coming into a vertex have just one edge, then S^2 = 0, and we are done: the number of paths is the number of edges.

If some of the paths have two (or more) edges, though, then the proximal edges will be overcounted if we simply count paths: once for the proximal vertex they are attached to, and once for the distal vertex further up the path (and again for each further vertex if the path is longer still). So after adding on all paths, we need to remove paths which can be chopped into two (non-trivial) paths, since these will include the two-edge paths.

If some of the paths have more than two edges, we will have overcounted the two-edge paths, though, because paths with three or more edges can be divided into two sub-paths in more than one way. So we need to add on paths chopped into three (non-trivial) sub-paths. And so on.

After thinking about this a little, we realise that, if \zeta contains the details of all paths, then \zeta^{-1} should do something corresponding for just the edges. And it does: if there are n edges (not paths!) connecting a \rightarrow b, with a \ne b, then (if all vertex automorphism groups are trivial) \zeta^{-1}_{ab} = -n. The entries on the diagonal are all 1, of course—one 1 for each vertex. So it is not very surprising that the sum of the entries of \zeta^{-1} is \vert V\vert - \vert E\vert.
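For a loop-free graph this collapse is easy to check numerically: since ζ = (I − A)⁻¹ for the adjacency matrix A, the alternating series for ζ⁻¹ collapses to I − A, whose entries are exactly 1 on the diagonal and −n for n parallel edges. A sketch for the diamond graph above:

```python
import numpy as np

# Diamond graph from the examples above: two routes x1 -> ... -> x6.
A = np.zeros((6, 6))
for x, y in [(0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 5)]:
    A[x, y] += 1                         # adjacency: edges only

zeta = np.linalg.inv(np.eye(6) - A)      # path counts: I + A + A^2 + ... (A nilpotent)
zeta_inv = np.eye(6) - A                 # I - S + S^2 - ... collapses to I - A

assert np.allclose(np.linalg.inv(zeta), zeta_inv)
print(zeta_inv.sum())                    # |V| - |E| = 6 - 6 = 0.0
```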

One conclusion to draw from this is that it is \zeta^{-1}, not \zeta, that encodes local (i.e. edgewise) information, and is therefore most closely analogous to the Laplacian. By contrast, \zeta contains pathwise information and is more analogous to the sort of two-point correlation function thingy that you get by inverting the Laplacian.

Of course, to invert the Laplacian, you need to firmly sweep its zero eigenvalues under the carpet. The zero eigenvalues contain, precisely, essential information about the cohomology of the graph. So the difference between the Laplacian and ζ\zeta along the diagonal is one of the key things that somehow makes that implicit information about the Euler characteristic a bit more accessible. In some way I don’t understand …

Of course, the Laplacian is operating in an environment in which we have paths going both ways along edges, in which situation we are dealing not only with \zeta and \zeta^{-1}, but also with their adjoints, which correspond to adjoint (or ‘weak inverse’) paths, i.e. paths going along the same route in the opposite direction.

As Tom Leinster has said, the weightings contain the “contribution” of each object (i.e. vertex) to the total Euler characteristic, that is \vert V\vert - \vert E\vert for the vertex, where \vert V\vert is of course 1 for a single vertex, and \vert E\vert is the number of incoming edges for that vertex (or outgoing edges if we are using the adjoint weighting).

Based on what I’ve said in this post, I’m not sure how invertibility for \zeta fails. I don’t think it can fail for a finite graph with only finite paths (i.e. no loops), since \zeta will always be the sum of a strictly positive diagonal matrix and a nilpotent matrix. For instance, though the zeta element of the group algebra over Z_2 isn’t invertible, the zeta-matrix sure is, since it’s simply the 1\times 1 matrix with single entry 2.

Posted by: Tim Silverman on February 7, 2007 8:30 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

Tim writes:

So in our series, S gets premultiplied by \Lambda^{-1}, which means that each sub-path in any chain of paths represented in S^n gets multiplied by the reciprocal of the number of automorphisms of its source;

Now assume we are on a connected groupoid, such that \Lambda = \lambda\,\mathrm{Id}.

Then combining this with Bruce’s remark implies that

\lambda is to be interpreted as a coupling constant!

Posted by: urs on February 8, 2007 8:10 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

Are you sure it isn’t a symmetry factor? …

Posted by: Tim Silverman on February 8, 2007 10:16 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

Are you sure it isn’t a symmetry factor? …

No…

Posted by: urs on February 9, 2007 9:16 AM | Permalink | Reply to this
Read the post Isham on Arrow Fields
Weblog: The n-Category Café
Excerpt: On Chris Isham's notion of an "arrow field" on a category.
Tracked: February 8, 2007 7:23 PM

Local formulas for Euler characteristic

If you’re interested in finding formulas for Euler characteristic that are “the integral of a local property”, as the Gauss-Bonnet formula is, then you might like this (courtesy again of Misha Feigin).

It’s from a paper by Alexander Gaifullin, Computation of characteristic classes of a manifold from its triangulation. He gives a local formula for the Euler characteristic of a finite simplicial complex. Although it’s very simple, it’s apparently new.

We’ll need two definitions:

  • Given a simplicial complex X and a vertex x of X, let link_X(x) be the simplicial complex whose n-simplices are the (n + 1)-simplices of X containing x.
  • Given a finite simplicial complex Y, let |Y_n| be the number of n-simplices of Y, and write log(Y) = 1 - |Y_0|/2 + |Y_1|/3 - |Y_2|/4 + \cdots. (I’m only calling it log because it’s a kind of logarithmic generating function: don’t read anything deep into the name.)

Gaifullin proves that for a finite simplicial complex X, \chi(X) = \sum_x log(link_X(x)), where the sum is over vertices x. The proof is completely elementary.
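The elementary proof invites an elementary check. A small sketch of the formula (my own rendering, using exact rational arithmetic), tested on the boundary of a triangle (a circle, χ = 0) and on the filled triangle (a disc, χ = 1):

```python
from fractions import Fraction

def chi_local(simplices):
    """Gaifullin-style local formula: chi(X) = sum over vertices x of
    log(link_X(x)), where log(Y) = 1 - |Y_0|/2 + |Y_1|/3 - ...
    `simplices` lists all simplices of X (faces included) as sets."""
    simplices = {frozenset(s) for s in simplices}
    vertices = {v for s in simplices for v in s}
    total = Fraction(0)
    for x in vertices:
        # n-simplices of link(x) are the (n+1)-simplices of X containing x,
        # i.e. those of size n + 2; their log-coefficient is (-1)^{n+1}/(n+2).
        log_y = Fraction(1)
        for s in simplices:
            if x in s and len(s) >= 2:
                n = len(s) - 2
                log_y += Fraction((-1) ** (n + 1), n + 2)
        total += log_y
    return total

triangle_boundary = [{0}, {1}, {2}, {0, 1}, {1, 2}, {0, 2}]  # a circle
filled_triangle = triangle_boundary + [{0, 1, 2}]            # a disc
print(chi_local(triangle_boundary))  # 0
print(chi_local(filled_triangle))    # 1
```

For the disc, each vertex contributes log = 1 − 2/2 + 1/3 = 1/3, and the three local contributions sum to χ = 1, as the formula promises.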

How does this fit in with what we’re doing? Well, a big difference between simplicial complexes and simplicial sets is that in a simplicial complex, the simplices don’t have orientations, but in a simplicial set, they do. Gaifullin’s proof works by counting the number of pairs (n-simplex, vertex of it) in two different ways (for each n). The oriented analogue would be to count (n-simplex, first vertex of it) in two different ways - but this is of course the same as counting the number of n-simplices in two different ways. The denominators in the definition of “log” are the number of vertices in an n-simplex, so in our oriented context these all become 1.

I think the upshot is that if we try to do an oriented (simplicial-setty) Gaifullin then we end up with \chi(X) = \sum_x w(x), where the sum is over vertices x, and w(x) is defined by the “local” formula (2), and X is assumed to be the nerve of a finite skeletal category containing only trivial endomorphisms.

Posted by: Tom Leinster on February 8, 2007 8:54 PM | Permalink | Reply to this

Re: Local formulas for Euler characteristic

If you’re interested in finding formulas for Euler characteristic that are “the integral of a local property”, as the Gauss-Bonnet formula is, then you might like this

Thanks! Very nice.

I cannot yet connect this directly to things I already know, but I'll keep it in mind.

Posted by: urs on February 9, 2007 9:36 AM | Permalink | Reply to this

Re: Local formulas for Euler characteristic

Thanks Tom! This is a great reference. Gaifullin gives local formulas for all sorts of things like Stiefel-Whitney classes, Pontryagin classes,… all from a triangulation.

It shows a nice way to understand the Euler characteristic of a category, as you explained. But I’m also interested because it seems like a great source of examples for ‘lattice-type’ TQFT’s! In particular, the diagrams which Gaifullin draws on pages 21-24 of his paper look just like the ‘higher Pachner moves’ which describe the higher category coherence isomorphisms you need for higher dimensional TQFT’s.

Posted by: Bruce Bartlett on February 9, 2007 5:38 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

Some of this discussion has revolved around the question: how does the Euler characteristic of a category relate to that of its classifying space?

Even to begin to answer that question, we need to know what the classifying space of a category is. There’s more to say than I’d realized.

Category theorists like to think of the classifying space construction as the composite of 'nerve' and 'geometric realization': $B = (\mathbf{Cat} \stackrel{N}{\to} \mathbf{SimpSet} \stackrel{| |}{\to} \mathbf{Top})$. This is good because it's nice and functorial, and it determines the classifying space $B A$ of a category $A$ up to homeomorphism. Arguably, it's bad because it gives a special role to simplicial sets: you can imagine doing the same kind of thing with (say) cubical sets in their place, and obtaining a $B A$ that's only homotopy equivalent to the simplicial $B A$.

But there's something else called 'classifying space'. Algebraic topologists mostly seem to be interested in the classifying space of a group. This is defined as follows. Let $G$ be a group. Choose a contractible space $E G$ acted on by $G$, such that the action of $G$ on $E G$ is free - or more accurately, the action of $G$ on the underlying set of $E G$ is free. Let $B G = E G/G$. It's a theorem that this does define $B G$ (and $E G$) uniquely up to homotopy equivalence; indeed, this follows from a certain up-to-homotopy universal property of the quotient map $E G \to B G$.

This definition is (arguably) good because it doesn't involve a choice of shape (simplicial, cubical etc.) and because you have some freedom: e.g. once you've spotted that $\mathbb{Z}$ acts freely on $\mathbb{R}$, you know that the classifying space of $\mathbb{Z}$ is $\mathbb{R}/\mathbb{Z} = S^1$.

The two definitions are compatible when you view a group $G$ as a one-object category, in the sense that the two answers '$B G$' are homotopy equivalent.

(I'll sketch the proof. If the single object of $G$ is called $a$, we have a slice category $G/a$ [sometimes called the 'action groupoid', I think: its objects are the elements of $G$ and there's exactly one map from any object to any other]. We also have a forgetful functor $G/a \to G$. Applying the category theorists' $B$ to both categories and the functor between them, this gives a continuous map $B(G/a) \to B G$. Let $E G = B(G/a)$. Since $G/a$ has a terminal object, $E G$ is contractible. Also, there's an obvious action of $G$ on $G/a$, hence on $B(G/a) = E G$, and the orbit-space of this action is $B G$.)

The problem with the second definition of classifying space is that it’s restricted to groups. I’ve been wondering if it’s possible to extend it to all small categories. I think the answer is yes, but I haven’t filled in all the details yet. Quite likely this has been worked out before (a long time ago?). If so, please tell me about it!

Here's how I think it goes. We need to generalize the following concepts used in the definition of $B G$:

  • action of $G$ on a space
  • free action of $G$ on a set
  • orbit-space of a $G$-space.

We also need to generalize the following results:

  • that the "category theorist's $B G$" arises as the space of orbits of a free $G$-action on a contractible space (i.e. the result proved in the parenthetical paragraph above)
  • that if $E$ and $E'$ are contractible spaces acted on freely by $G$ then $E/G$ and $E'/G$ are homotopy equivalent.

I think I can generalize all but the last from groups to categories. Here goes.

Let $A$ be a small category.

  • An 'action' of $A$ is a functor $A \to \mathbf{Top}$.
  • A functor $A \to \mathbf{Set}$ is 'free' if it's a coproduct of representables.
  • The orbit-space of a functor $E: A \to \mathbf{Top}$ is its colimit $colim E \in \mathbf{Top}$.

That generalizes the concepts. (In other words, if you take $A$ to be a group then you get back the concepts listed before.) I therefore define a space $B$ to be a classifying space of $A$ if there exists a functor $E: A \to \mathbf{Top}$ such that

  1. the underlying set-valued functor $A \to \mathbf{Set}$ is a coproduct of representables
  2. each space $E(a)$ is contractible
  3. $colim E$ is homeomorphic to $B$.

Now I'll sketch a proof that the familiar $B A$ (the geometric realization of the nerve) is a classifying space of $A$ in this sense. Define $E: A \to \mathbf{Top}$ by $E(a) = B(A/a)$, where $A/a$ is the slice category and $B(\ )$ is the category theorist's classifying space construction. A little calculation shows that the underlying set-valued functor is a coproduct of representables. Each category $A/a$ has a terminal object, so $E(a) = B(A/a)$ is contractible. Another little calculation shows that $colim E = B A$. Voilà!

Now I’m going to read Ieke Moerdijk’s Classifying spaces and classifying topoi in the search for enlightenment.

Posted by: Tom Leinster on February 20, 2007 8:49 PM | Permalink | Reply to this
Read the post Canonical Measures on Configuration Spaces
Weblog: The n-Category Café
Excerpt: On how the Leinster weighting on a category might provide path integral measures in physics.
Tracked: March 8, 2007 9:43 AM

Re: This Week’s Finds in Mathematical Physics (Week 244)

I’ve been thinking a bit about applying Tom Leinster’s methodology for Euler characteristics to certain kinds of infinite categories, namely categories freely generated by directed graphs with cycles in them. The problem is that the zeta operator is defined in terms of the number of morphisms (here, paths) between objects, which in this case is infinite.

The obvious way to approach this is to downgrade the weight of longer paths by some factor $p^n$ for paths of length $n$, do our totalling, inversion, summing, etc., and then let $p \rightarrow 1$ at the end, hoping to get out something finite. The last time I tried this, all I proved was that, yep, the zeta operator sure does have a lot of infinities in it. But this time round I seem to have done a bit better.

Let’s try a simple cyclical graph with two vertices and two edges between them, one pointing each way:

$\array{&\rightarrow\\ \cdot&&\cdot\\ &\leftarrow}$

The paths going from the lefthand vertex to itself are of lengths $0$, $2$, $4$, etc., so let's total them up, multiplying each by $p^n$ where $n$ is its length, getting $1 + p^2 + p^4 + \dots = \frac{1}{1-p^2}$.

This is the same for the paths from the righthand vertex to itself. Finally, from the lefthand vertex to the righthand vertex, or vice versa, we have the same, but with one extra, initial edge.

So let’s say our zeta operator is

$\zeta = \frac{1}{1-p^2}\left(\array{1&p\\p&1}\right)$

The determinant is $\frac{1-p^2}{(1-p^2)^2} = \frac{1}{1-p^2}$.

So the inverse is $\left(\array{1&-p\\-p&1}\right)$. Letting $p \rightarrow 1$ and summing all matrix entries (or vice versa!) we get $0$. Which is correct for the Euler characteristic of the graph. So far, so good …

Now we try adding an extra arrow from left to right:

$\array{&\rightarrow\\ &\rightarrow\\ \cdot&&\cdot\\ &\leftarrow}$

This time, there are two ways to get from left to right, so the numbers of paths are $1$ of length $0$, $2$ of length $2$, $4$ of length $4$, $8$ of length $6$ and generally $2^n$ of length $2n$. So totalling, we get $1 + 2p^2 + 4p^4 + \dots = \frac{1}{1-2p^2}$.
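These counts are easy to confirm by brute force. A small sketch (`count_walks` is a hypothetical helper of mine, not something from the text), treating `adj[i][j]` as the number of edges from vertex `i` to vertex `j`:

```python
def count_walks(adj, i, j, length):
    """Number of walks of the given length from i to j, by repeatedly
    pushing a row vector through the edge-multiplicity matrix."""
    row = [1 if k == i else 0 for k in range(len(adj))]
    for _ in range(length):
        row = [sum(row[a] * adj[a][b] for a in range(len(adj)))
               for b in range(len(adj))]
    return row[j]

# Two edges left -> right, one edge right -> left:
adj = [[0, 2], [1, 0]]
print([count_walks(adj, 0, 0, 2 * n) for n in range(5)])  # [1, 2, 4, 8, 16]
```

The doubling per round trip matches the coefficient of $p^{2n}$ in $\frac{1}{1-2p^2}$.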

So we get a zeta operator

$\zeta = \frac{1}{1-2p^2}\left(\array{1&2p\\p&1}\right)$

We can actually let $p \rightarrow 1$ already to get $\left(\array{-1&-2\\-1&-1}\right)$ and invert to give us

$\left(\array{1&-2\\-1&1}\right)$

Totalling entries gives an Euler characteristic of $-1$, which is correct for the graph.

In the same way, if we have some general number $A$ of arrows going to the right, we can see we get a cycle factor of $\frac{1}{1-Ap^2}$ and a zeta operator of

$\frac{1}{1-Ap^2}\left(\array{1&Ap\\p&1}\right)$, with determinant $\frac{1}{1-Ap^2}$ and inverse

$\left(\array{1&-Ap\\-p&1}\right)$.

Letting $p \rightarrow 1$, we get

$\left(\array{1&-A\\-1&1}\right)$.

Totalling, we get $1-A$. Which, again, is correct for the graph.

If we also have some number $B$ of arrows going to the left, then each zigzag back and forth multiplies the number of paths by $AB$, giving a cycle factor of $\frac{1}{1-ABp^2}$, a zeta operator

$\zeta = \frac{1}{1-ABp^2}\left(\array{1&Ap\\Bp&1}\right)$

with inverse

$\left(\array{1&-Ap\\-Bp&1}\right)$

and Euler characteristic $2-A-B$, which is again right for the graph.

Now it should be clear that the inverse matrix is still equal to the identity minus the adjacency matrix (possibly multiplied by a factor $p$), $\mathbf{I} - p\mathbf{A}$. And it is easy to see why: inverting this gives $\mathbf{I} + p\mathbf{A} + p^2\mathbf{A}^2 + \dots$, which gives the summed weighting over all paths, or in other words the deformed zeta operator.
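That identity is easy to confirm symbolically. Here's a sympy sketch (symbol names chosen to match the text) for the two-vertex graph with $A$ arrows right and $B$ arrows left:

```python
import sympy as sp

p, A_, B_ = sp.symbols('p A B')
# Two vertices: A_ edges left -> right, B_ edges right -> left.
Adj = sp.Matrix([[0, A_], [B_, 0]])
I2 = sp.eye(2)

# Inverting I - p*Adj should reproduce the deformed zeta operator.
zeta = (I2 - p * Adj).inv()
expected = sp.Matrix([[1, A_ * p], [B_ * p, 1]]) / (1 - A_ * B_ * p**2)
assert sp.simplify(zeta - expected) == sp.zeros(2, 2)

# Euler characteristic: total the entries of zeta's inverse at p = 1.
chi = sum((I2 - p * Adj).subs(p, 1))
assert sp.simplify(chi - (2 - A_ - B_)) == 0
print("zeta = (I - pA)^(-1), and chi = 2 - A - B")
```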

Now let’s just look at a couple of examples with three vertices.

First, a bunch of cycles round the graph in the same direction: $A$ edges from $a \rightarrow b$; $B$ edges from $b \rightarrow c$; and $C$ edges from $c \rightarrow a$.

$\array{ a&\overset{A}{\rightarrow}&b\\ \underset{C}{\nwarrow}& &\underset{B}{\swarrow}\\&c }$

The cycle gives us a term $\frac{1}{1-p^3ABC}$. For each vertex, we have a term $1$ for the trivial path (and its catenations with all cycles), a term in $p$ for the path to the next vertex (and its catenations with all cycles) and a term in $p^2$ for the path to the next vertex but one (and its catenations with all cycles).

Thus our zeta operator is

$\frac{1}{1-p^3ABC}\left(\array{1&pA&p^2AB\\p^2BC&1&pB\\pC&p^2CA&1}\right)$

The determinant is $\frac{1}{1-p^3ABC}$ and laboriously inverting gives us

$\left(\array{1&-pA&0\\0&1&-pB\\-pC&0&1}\right)$

This is simply $\mathbf{I} - p\mathbf{A}$ (where the bold $\mathbf{A}$ is the adjacency matrix as usual).

One final example: in the above, add $D$ arrows going back from $b$ to $a$. I'm afraid I'm too lazy and timid to work out the zeta operator from scratch: instead I'll invert $\mathbf{I} - p\mathbf{A}$. Thus we take

$\left(\array{1&-pA&0\\-pD&1&-pB\\-pC&0&1}\right)$

The determinant is $1 - p^2AD - p^3ABC$. Inverting gives us

$\frac{1}{1-p^2AD-p^3ABC}\left(\array{1&pA&p^2AB\\pD+p^2BC&1&pB\\pC&p^2AC&1-p^2AD}\right)$.
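Since the inversion is laborious, it's worth checking symbolically. A sympy sketch of the determinant and a couple of sample entries of $(\mathbf{I}-p\mathbf{A})^{-1}$ for this graph; note that the $(c,c)$ entry picks up a $1-p^2AD$ numerator, because the $a \leftrightarrow b$ 2-cycle misses $c$:

```python
import sympy as sp

p, A_, B_, C_, D_ = sp.symbols('p A B C D')
# Adjacency: A_ edges a->b, B_ edges b->c, C_ edges c->a, D_ edges b->a,
# with vertices ordered (a, b, c).
M = sp.Matrix([[0, A_, 0], [D_, 0, B_], [C_, 0, 0]])
X = sp.eye(3) - p * M

det = X.det()
assert sp.simplify(det - (1 - p**2 * A_ * D_ - p**3 * A_ * B_ * C_)) == 0

zeta = X.inv()
# The (c,c) entry: only the 3-cycle passes through c, with a<->b
# excursions contributing the 1 - p^2*A*D numerator.
assert sp.simplify(zeta[2, 2] - (1 - p**2 * A_ * D_) / det) == 0
# The (c,b) entry: paths c -> a -> b give p^2 * C * A.
assert sp.simplify(zeta[2, 1] - p**2 * A_ * C_ / det) == 0
print("determinant and inverse entries check out")
```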

OK, that’s enough of that—I think we can see the pattern clearly enough.

Now let’s take another look at that last determinant.

Notice the clever way that it counts the number of 2- and 3-cycles and assigns the results as coefficients of $p^2$ and $p^3$. After calculating a lot of determinants, I've come to the conclusion that this is generally true, except that it's a bit more complicated. What's really going on is that the determinant is counting allowed permutations of the vertices under the action of the graph edges. An allowed permutation of vertices is constructed thus: for each vertex, pick either the identity, or one of the (directed) edges which has its source at the vertex. Then send that vertex to either itself (if we chose the identity) or to the destination vertex of the edge. Each allowed permutation has a cycle decomposition; we add up the lengths of the cycles in the decomposition (counting the identity as length 0), and that gives the power of $p$ to which the permutation contributes.

Obviously, primitive graph cycles which share vertices can't act simultaneously, but completely disjoint cycles can, so, e.g. if we have two disjoint 3-cycles in the graph, we will get terms $(1 - p^3 - p^3 + p^6) = (1-p^3)(1-p^3)$, the two $p^3$ terms arriving when we choose the 3-cycle edges on one cycle and the identities on the other, and the $p^6$ term resulting from running both 3-cycles simultaneously. In this case, the polynomial will always factorise, simply because the permutation group itself factorises (… I think).

The reciprocal of the determinant is the determinant of the zeta operator and is a sort of zeta function, or at least a zeta functioney sort of thingy, ish. It's quite similar to the Ihara zeta function, but there are some differences: the Ihara zeta function is applied to undirected graphs, so we need to remember we are always counting cycles in both directions; and the Ihara zeta function has a quadratic term in its argument (my $p$) which I think is intended to discount backtracking 2-cycles, and has the interesting effect of (almost) turning it into the reciprocal of the determinant of the Laplacian. (I think this technique for knocking out certain 2-cycles can be adapted to eliminate the effect of longer cycles too - e.g. if we wanted to take into account the homotopic contractibility of some polygonal 2-cell that we had inserted into the graph; but I haven't investigated that in detail.) There may be other differences, because I haven't looked at the Ihara zeta function as hard as I would have liked to. Actually, I'm sadly quite confused about it.

I also wanted to try and use this approach on the Euler characteristic of the automorphism groups of individual vertices. Let's try a cyclic group of order $n$. We start by choosing a graph with one vertex and one (directed) edge. This gives, as its graph category, the free category on one generator, with its deformed zeta operator (which is a number, since we only have one vertex) being equal to $\frac{1}{1-p}$. This gives an Euler characteristic of $0$ as expected. Now we quotient out by the subcategory generated by the $n$th power of the edge, whose zeta operator is presumably $\frac{1}{1-p^n}$. So dividing the first by the second, we get a zeta operator for the cyclic group $\mathbb{Z}_n$ of $\frac{1-p^n}{1-p} = 1 + p + p^2 + \dots + p^{n-1}$. Obviously the inverse of this zeta operator is just the reciprocal of this expression, and clearly as $p \rightarrow 1$ we get $\chi \rightarrow \frac{1}{n}$. So far, so good!
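A quick symbolic check of these $p \rightarrow 1$ limits (a sympy sketch; the concrete cases $\mathbb{Z}_5$ and $\mathbb{Z}_2 \times \mathbb{Z}_3$ are my choices):

```python
import sympy as sp

p = sp.symbols('p', positive=True)

# Z_5: deformed zeta operator (1 - p^5)/(1 - p) = 1 + p + ... + p^4,
# the p-deformed integer [5].
zeta_Z5 = (1 - p**5) / (1 - p)
print(sp.limit(1 / zeta_Z5, p, 1))   # 1/5

# Z_2 x Z_3: product of deformed integers [2][3] -> 6 as p -> 1.
zeta_ab = (1 - p**2) * (1 - p**3) / (1 - p)**2
print(sp.limit(1 / zeta_ab, p, 1))   # 1/6
```

In each case the reciprocal of the deformed zeta operator tends to the reciprocal of the group order, as claimed.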

We can extend this to other finite abelian groups easily enough. We take the graph with one vertex and $r$ edges. Forcing everything to be abelian, the universal covering graph of this is the $r$-dimensional lattice over $\mathbb{Z}$ and we get a zeta operator of $\frac{1}{(1-p)^r}$. We can then quotient out each dimension separately, getting $\frac{(1-p^{n_1})(1-p^{n_2})\cdots(1-p^{n_r})}{(1-p)^r}$, ending up with a product of $p$-deformed integers $[n_1][n_2]\cdots[n_r]$ which tends to the order of the group as $p \rightarrow 1$. Hence the Euler characteristic comes out as the reciprocal of the group size. And we get a zeta function for the group into the bargain. (Although I'm tempted to assign a different variable to each generator … but let's not go there … )

OK so far …

For a non-abelian group $G$, although it's clear in principle what to do, the graph approach doesn't really make things clearer than the algebraic approach. For a presentation with $n$ generators, we have a graph with $1$ vertex and $n$ edges. The universal covering graph is an infinite $n$-ary tree. The zeta operator will be $\frac{1}{1-np}$. It's when we try to take a quotient that things get a bit hairy. In principle, it's obvious what we need to do: we take each element of the group, express it as a product of $k$ generators, write down a term $p^k$, add them up, take the reciprocal, and get something that tends to $\frac{1}{|G|}$ as $p \rightarrow 1$. In practice, we might as well just miss out the intermediate steps and take the Baez-Dolan approach from the start. On the other hand, in even more principled principle, I find myself wanting to pick different variables for each generator, stop them from commuting, and in fact actually take them from some representation of the group, and, umm, … do something intelligent with them …

As usual, I’m sure this has all been worked over before now by somebody other than me, somewhere; but I’m learning new stuff.

Posted by: Tim Silverman on March 8, 2007 9:03 PM | Permalink | Reply to this

Re: This Week’s Finds in Mathematical Physics (Week 244)

As usual, I’m sure this has all been worked over before now by somebody other than me, somewhere; but I’m learning new stuff.

That’s very interesting: a regularization procedure for computing ill-defined Leinster measures on non-finite categories! I doubt that too many other people have even tried thinking about this. At least not explicitly.

Who knows, maybe in the end this turns out to reduce to a problem of the kind well-known in quantum field theory. It sure has a similar touch to it.

Posted by: urs on March 12, 2007 3:20 PM | Permalink | Reply to this

A Zeta-like Generating Function for Certain Freely Finitely-Generated 2-Categories

Urs kindly wrote:

I doubt that too many other people have even tried thinking about this. At least not explicitly.

Thanks for this encouraging remark. But I’m still not sure, myself—in this intersection of well-trodden highways like zeta-functions, graph theory, Euler characteristics and q-deformation, can there still be much untrampled ground? For instance, if we look in twf 186 we see that my vague prescription

we take each element of the group, express it as a product of $k$ generators, write down a term $p^k$, add them up, take the reciprocal, and get something that tends to $\frac{1}{|G|}$ as $p \rightarrow 1$

becomes exact for Coxeter groups. And if we follow up some of the refs there, we quickly run into the growth series of geometric group theory.

However, there are one or two things I have managed to do in the last month or so, which I will talk about here. I think I have finally managed to outdo both Urs for length of post, and Greg Egan for quantity and grunginess of calculations, so I’ve provided some headings to help you to know which bits to skip.

Part 0 in which some unfinished business from the last post is completed, both to clear the way for the arduous ordeal ahead, and to prefigure our ultimate result

Of $\frac{1}{\vert\mathbf{I}-u\mathbf{A}\vert}$ (with $\mathbf{A}$ the adjacency matrix of a directed graph, and $\vert\cdot\vert$ indicating determinant), I said,

[It] is a sort of zeta function, or at least a zeta functioney sort of thingy, ish.

I can now do a bit better than that. We get the following argument, which is basically a drastic simplification of the one in Ihara zeta functions for periodic simple graphs mentioned by Eric Forgy. (This in turn seems to be a slight generalisation of the one in the Terras/Stark graph zeta-function review paper in Advances in Math. 121 (1996).)

We use $\mathfrak{R}$ for the set of all cycles (this stands for reduced cycles in the papers above but, in our context, all cycles are reduced). By a cycle, we mean an equivalence class of paths whose final vertex is equal to their initial vertex, modulo changing the choice of path start/end point. We use $\mathfrak{P}$ for the prime cycles, i.e. the ones that go round once.

For a cycle $C$, we denote its length by $\vert C\vert$. We are going to use this length as the norm of the path. We also define $\nu(C)$ as follows: if $C$ is a prime cycle, then $\nu(C)$ is its length $\vert C\vert$; if $C$ is a power of a prime cycle, then $\nu(C)$ is the length of that prime cycle.

We are going to count cycles of length $m$ by taking the trace of $\mathbf{A}^m$, which will count $1$ for each point on each cycle, i.e. a total of $\nu(C)$ for the whole cycle. From these counts, we get a generating function, a "zeta" function.

We define our directed graph zeta function by

$Z(u) = \frac{1}{\prod_{C\in\mathfrak{P}}\left(1-u^{\vert C\vert}\right)}.$

Now we have

$\sum_{m=1}^{\infty} Tr(\mathbf{A}^m)u^m = \sum_{C\in\mathfrak{R}}\nu(C) u^{\vert C\vert}$

$=\sum_{m=1}^\infty \sum_{C\in\mathfrak{P}}\vert C\vert u^{\vert C^m\vert}$

$=\sum_{C\in\mathfrak{P}}\sum_{m=1}^\infty \vert C\vert u^{m\vert C\vert}$

$=\sum_{C\in\mathfrak{P}}u \frac{d}{du}\sum_{m=1}^\infty \frac{u^{m\vert C\vert}}{m}$

$=-\sum_{C\in\mathfrak{P}}u\frac{d}{du}\ln(1-u^{\vert C\vert})$

$=-u\frac{d}{du}\sum_{C\in\mathfrak{P}}\ln(1-u^{\vert C\vert})$

$=u\frac{d}{du}\ln Z(u)$

But meanwhile, back at the ranch,

$\sum_{m=1}^{\infty} Tr(\mathbf{A}^m)u^m = Tr\left(\sum_{m=1}^\infty \mathbf{A}^m u^m\right)$

$=Tr\left(\mathbf{A}u \sum_{m=0}^\infty \mathbf{A}^m u^m\right)$

$=Tr\frac{\mathbf{A}u}{\mathbf{I}-\mathbf{A}u}$

$=-Tr\left(u\frac{d}{du}\ln(\mathbf{I}-\mathbf{A}u)\right)$

$=-u\frac{d}{du}Tr\left(\ln(\mathbf{I}-\mathbf{A}u)\right)$

So

$u\frac{d}{du}\ln Z(u) = -u\frac{d}{du}Tr\left(\ln(\mathbf{I}-\mathbf{A}u)\right)$

Hence

$Z(u) = \exp\left(-Tr\,\ln(\mathbf{I}-\mathbf{A}u)\right)$

$=\frac{1}{\vert\mathbf{I}-\mathbf{A}u\vert}$

where in the last line the vertical bars indicate determinant.

So the determinant of the q-deformed zeta operator gives a real zeta function.
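The key step - that $u\frac{d}{du}\ln Z(u)$ generates the based closed walk counts $Tr(\mathbf{A}^m)$ - can be verified as a formal power series for a concrete graph. A sympy sketch for the earlier two-vertex example:

```python
import sympy as sp

u = sp.symbols('u')
# Two vertices, two edges right, one edge left.
M = sp.Matrix([[0, 2], [1, 0]])
Zinv = (sp.eye(2) - u * M).det()          # 1/Z(u) = |I - A u|
lhs = sp.series(u * sp.diff(sp.log(1 / Zinv), u), u, 0, 8).removeO()

# The coefficient of u^m should be Tr(A^m), the count of based closed walks.
for m in range(1, 8):
    assert lhs.coeff(u, m) == (M**m).trace()
print("u d/du log Z(u) counts closed walks, as claimed")
```

Here $\vert\mathbf{I}-\mathbf{A}u\vert = 1-2u^2$, so the series is $4u^2 + 8u^4 + 16u^6 + \dots$, matching the traces of the even powers of $\mathbf{A}$.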

Well, real-ish. It has the right Euler product expansion, and when I first did this I thought it displayed so clearly the pure and pristine essence of zetaness that I was kind of annoyed it wasn’t mentioned more. But, on further examination, while I think I can get a functional equation out of it, I can’t for the life of me see any reason why it should obey a Riemann hypothesis. So maybe it’s not as zeta-ish as all that.

Anyway, having got this directed graph zeta function, we want to compare it to the Ihara zeta function (of an undirected graph) and see where the difference arises - in particular, the complicating quadratic term in the latter. (Which is defined in terms of an Euler product, but gives $Z(u) = (1-u^2)^\chi\vert\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^2\vert^{-1}$, where $\chi$ is the graph's Euler characteristic, $\mathbf{A}$ is the adjacency matrix and $\mathbf{Q}$ is the diagonal matrix of vertex degrees minus one, $diag(deg(v_1)-1, deg(v_2)-1, \dots)$.)

An undirected graph is of course a kind of directed graph, but the path-counting that underlies the Ihara zeta function ignores paths that include backtracking (go along an edge and then immediately come back along the same edge) and tails (first step of a cyclic path goes along an edge, last step comes back along the same edge). If we take going in opposite directions along the same edge to be weak inverses or adjoints of each other, in a 2-category of paths, then both these exclusions remove bits of path homotopic (or 2-morphic) to null paths. That is, we only count homotopy classes of paths, assigning them the norm of their shortest member.

Part 1 in which the author lulls you into a false sense of security with some general chit-chat and easy definitions, and then states the main result

No sooner said than generalised. What if we take an arbitrary directed graph, and then declare that certain cycles are homotopically “collapsible”, and that we are only going to count paths that do not include any such cycle? I’ve tried to adapt the Terras/Stark argument to cover this generalisation. We’ll assume initially that all the collapsible cycles pass through each of their member vertices only once, and that no two collapsible cycles share any edges. We’ll attempt to generalise this later on.

We'll use the notation $\mathbf{Q}_k^{(1)}$ to indicate the matrix generating the collapsible cycles of length $k$ (i.e. it's the part of the adjacency matrix that takes you one step around each collapsible cycle of length $k$). If we don't allow cycles to share edges, then this is straightforwardly the sum of the matrices which respectively generate each individual cycle. We use $\mathbf{Q}_k^{(n)}$ to indicate the matrix of $n$th powers of these cycles (i.e. the matrix that takes you $n$ steps around each collapsible cycle of length $k$). In particular, $\mathbf{Q}_k^{(k)}$ is diagonal, and $\mathbf{Q}_k^{(n+k)} = \mathbf{Q}_k^{(n)}$. In this notation, an undirected graph is the special case with $\mathbf{Q}_2^{(1)} = \mathbf{A}$ (since every edge gives rise to a 2-cycle) and $\mathbf{Q}_2^{(2)} = \mathbf{Q} + \mathbf{I}$ (using $\mathbf{Q}$ as in the formula for the Ihara zeta function given above, which assigns degree minus one to each vertex).

At this point, we have defined enough notation to be able to present our main result, viz. a closed form formula for a sort of zeta function for this generalised case, which we are about to argue for at length:

$Z(u)^{-1} = \vert\Xi(u)\vert \prod_{k=1}^\infty (1-u^k)^{\frac{k-1}{k}Tr\left(\mathbf{Q}_k^{(0)}\right)}$

where

$\Xi(u) = \mathbf{I} - \mathbf{A}u + \sum_{k=1}^\infty\frac{u^k}{1-u^k}\left(\mathbf{Q}_k^{(0)} - u\mathbf{Q}_k^{(1)}\right)$

(From cyclicity, $\mathbf{Q}_k^{(0)}$ is equal to $\mathbf{Q}_k^{(k)}$.)

We’ll discuss this further in the final section.
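One consistency check on this closed form (my own sympy sketch, not part of the argument that follows): in the undirected special case $\mathbf{Q}_2^{(1)} = \mathbf{A}$, $\mathbf{Q}_2^{(0)} = \mathbf{Q} + \mathbf{I}$, it should collapse to the Ihara formula quoted earlier. For the triangle $K_3$ (every vertex has degree 2, so $\mathbf{Q} = \mathbf{I}$, and $\chi = 3 - 3 = 0$):

```python
import sympy as sp

u = sp.symbols('u')
# K3 as a directed graph: each undirected edge becomes a pair of
# opposite directed edges, i.e. a collapsible 2-cycle.
A = sp.Matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
Q = sp.eye(3)      # diag(deg - 1): every vertex of K3 has degree 2
I3 = sp.eye(3)

# Xi from the closed form, with only k = 2 present:
# Q_2^{(1)} = A and Q_2^{(0)} = Q + I.
Xi = I3 - A * u + (u**2 / (1 - u**2)) * ((Q + I3) - u * A)
# Tr Q_2^{(0)} = 6, so the product factor is (1-u^2)^((1/2)*6) = (1-u^2)^3.
Z_inv = sp.simplify(Xi.det() * (1 - u**2)**3)

# Ihara: Z(u)^{-1} = (1-u^2)^{-chi} |I - Au + Qu^2|; here chi = 0.
ihara = sp.expand((I3 - A * u + Q * u**2).det())
assert sp.simplify(Z_inv - ihara) == 0
print("closed form reduces to the Ihara zeta function for K3")
```

The $(1-u^2)$ denominators inside $\Xi$ cancel against the product factor, leaving exactly the quadratic Ihara determinant.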

Now we get down to work.

Part 2 in which we warm up with a discussion of path-counting and some calculations

We argue as follows, starting with the elimination of collapsible cycles of a single length $k$ and then generalising to cycles of various lengths.

Let $\mathbf{A}_m$ denote the matrix of numbers of collapsed paths of length $m$, where the $ij$th element of the matrix counts the number of collapsed paths from vertex $i$ to vertex $j$, and by a "collapsed" path we mean one with no collapsible cycles in it.

For $m < k$ we simply have $\mathbf{A}_m = \mathbf{A}^m = \mathbf{A}_{m-1}\mathbf{A}$. For $m = k$ we just need to eliminate the newly created complete collapsible cycles of length $k$, so $\mathbf{A}_m = \mathbf{A}_{m-1}\mathbf{A} - \mathbf{Q}_k^{(k)}$, or rather (to anticipate the next steps) $\mathbf{A}_m = \mathbf{A}_{m-1}\mathbf{A} - \mathbf{A}_0\mathbf{Q}_k^{(k)}$ where $\mathbf{A}_0 = \mathbf{I}$.

After this, we need to start thinking …

At $m = k+1$ we have to take off any new cycles that were added by the $(k+1)$st step, giving $\mathbf{A}_{m-1}\mathbf{A} - \mathbf{A}_1\mathbf{Q}_k^{(k)}$. However, at this point an inclusion-exclusion principle steps in, because $\mathbf{A}_1\mathbf{Q}_k^{(k)}$ will include paths whose first step (produced by $\mathbf{A}_1$) is already one of the steps of one of the collapsible cycles of length $k$. That cycle would then already be completed by the first $k-1$ steps implicit in $\mathbf{Q}_k^{(k)}$. So $\mathbf{A}_1\mathbf{Q}_k^{(k)}$ will include the paths which take us $k+1$ steps around a $k$-cycle. But since they include a $k$-cycle, these paths will have already been excluded from those counted in $\mathbf{A}_{m-1}$, and do not need to be excluded all over again. So we need to put them back in:

$\mathbf{A}_m = \mathbf{A}_{m-1}\mathbf{A} - \mathbf{A}_1\mathbf{Q}_k^{(k)} + \mathbf{A}_0\mathbf{Q}_k^{(k+1)}$

We might think that at $m = k+2$ we need to add a third term, subtracting $\mathbf{A}_0\mathbf{Q}_k^{(k+2)}$, to continue the inclusion-exclusion, but that is not the case. The point is that $\mathbf{A}_{m-k}\mathbf{Q}_k^{(k)}$ includes the completion of all partial $k$-cycles in $\mathbf{A}_{m-k}$ with up to $k-1$ steps, while $\mathbf{A}_{m-(k+1)}\mathbf{Q}_k^{(k+1)}$ removes all of those except the first. The first is, of course, the newly created cycle that we really do want to remove. (Newly created, that is, by our step from $\mathbf{A}_{m-1}$ to $\mathbf{A}_m$.)

For instance, if we are thinking about collapsible 5-cycles, then, generally, for $m$ large enough, $\mathbf{A}_{m-5}$ will include steps that go 0, 1, 2, 3, or 4 steps around a 5-cycle, as, indeed, will $\mathbf{A}_{m-6}$. Hence $\mathbf{A}_{m-5}\mathbf{Q}_5^{(5)}$ will include paths that overrun an entire 5-cycle by an additional 0, 1, 2, 3 or 4 steps, while $\mathbf{A}_{m-6}\mathbf{Q}_5^{(6)}$ will remove paths that overrun by 1, 2, 3, 4, or 5 steps. (Of course, if $m$ is not large enough, then one or more of the longer overruns will not happen, but that affects both terms in the same way.) So the net effect of $-(\mathbf{A}_{m-5}\mathbf{Q}_5^{(5)} - \mathbf{A}_{m-6}\mathbf{Q}_5^{(6)})$ is to remove the entire cycle (overrun of 0), to remove but then put back overruns of 1, 2, 3, or 4 steps, and to insert (if $m$ is large enough) a gratuitous entire cycle (overrun of 5 steps), which needs to be removed by a second round of inclusion-exclusion, starting at $\mathbf{A}_{m-2k}\mathbf{Q}_k^{(2k)}$. And similarly for cycles of other lengths. So we end up with:

$\mathbf{A}_m = \mathbf{A}_{m-1}\mathbf{A} - \sum_{k=1}^m\sum_{n=1}^{\left\lfloor\frac{m}{k}\right\rfloor}\left(\mathbf{A}_{m-nk}\mathbf{Q}_k^{(nk)} - \mathbf{A}_{m-(nk+1)}\mathbf{Q}_k^{(nk+1)}\right)$

(For this formula to make sense, we need to introduce $\mathbf{A}_{-1} = \mathbf{0}$, which is a little annoying to have to specify, but turns out quite convenient later.)

Note that the situation changes a bit when we get towards the end of the series, with terms $\mathbf{A}_{m-nk}$ with $m - nk < k$, although fortunately the end result is the same. The reason is that, with these terms, the paths represented by $\mathbf{A}_{m-nk}$ are short enough to fit entirely within the collapsible $k$-cycle. With paths that are too long to fit in the collapsible cycle, the possibilities are constrained by the fact that some initial segment of each path must lie outside the cycle. For instance, if there is only one vertex in the cycle with an incoming edge that is not already part of the cycle, then any long path must enter the cycle through that vertex. But once we get down to paths that are short enough to fit entirely inside the cycle, $\mathbf{A}_{m-nk}$ will include some paths which are freed from all constraint by anything outside the cycle; we'll get $k$ such paths, one starting at each vertex of the cycle, all lying entirely inside the cycle.

Fortunately, this contribution to $\mathbf{A}_m$ coming from $\mathbf{A}_{m-nk}$ will always be cancelled by an exactly equal number of paths (i.e. $k$ of them) coming from another term. This cancellation will usually come from the term in $\mathbf{A}_{m-(nk+1)}$. The only case where this latter is missing is the one where $m = nk$, so we have a term in $\mathbf{A}_0$, where the contribution of $\mathbf{A}_{-1}$ doesn't do anything. But in this case the short paths contributed by $\mathbf{A}_0$ will be cancelled by the extra paths coming from the term in $\mathbf{A}_{k-1}$ (there won't be any extra paths coming from the latter term's partner, viz. the term in $\mathbf{A}_k$, since this doesn't include any terms fitting (snugly) in the $k$-cycle - they are precisely removed by the requirement that all $\mathbf{A}_m$, including $\mathbf{A}_k$, consist of non-collapsible paths!)

I’ve emphasised this point not only for the sake of completeness, but also because in a moment, when we come to deal with tailed paths, things will not work out quite so simply, and we will need to make special provision for the combinatorics of the short paths that can fit into a collapsible kk-cycle.

Part 3 in which we think long and hard about the combinatorics of paths, cyclic and non-cyclic, collapsible and otherwise, and you—if you value your sanity—draw yourself lots of illustrative diagrams to accompany the text

So now we come to those tailed paths. This is where things get very tricky indeed, so hold onto your hats.

What we want is cyclic paths which start somewhere on a collapsible kk-cycle, go part way round, reaching some vertex vv, leave the collapsible cycle and go around a non-collapsible cycle, returning to vv, and then go around the rest of the collapsible cycle to end up where they started. These are the paths that are not excluded from the A m\mathbf{A}_m (they do not include a whole collapsible cycle anywhere along their length) but which, when joined up at the ends to form a complete cycle, turn out to contain a collapsible cycle lurking within them, lying across the start/end point. We need to get rid of them.

We’ll start our quest of destruction with the matrix A mQ k (k)\mathbf{A}_m\mathbf{Q}_k^{(k)}. We’ll be taking the trace of this, in order to pick out the cycles. The sort of path that we ideally want this matrix to pick out is the sort that starts at some point vv lying on the intersection of a collapsible kk-cycle (represented by Q k (k)\mathbf{Q}_k^{(k)}—let’s call this cycle Q) and a non-collapsible mm-cycle (represented by A m\mathbf{A}_m—let’s call this cycle A), with the two cycles Q and A sharing no edges. The path will first go around the non-collapsible cycle A, then around the collapsible one Q. Of course, the resulting path will contain an overt collapsible cycle between start and end point (viz. Q!), and so not be tailed, but the point is this: instead of this path, we can take another path representing the same cycle, but starting at any one of the k1k-1 points other than vv on the kk-cycle Q. Each of these gives rise to a tailed path. So we take Tr((k1)A mQ k (k))Tr((k-1)\mathbf{A}_m\mathbf{Q}_k^{(k)}), and all is well. We hope.

Alas, no. There are other, nastier paths which throw a spanner into our beautiful works. The problem is that the non-collapsible paths represented by A m\mathbf{A}_m can include paths which share one or more edges with the collapsible kk-cycle Q, at their beginning, or their end, or both. These cause a problem which has to be dealt with.

For instance, suppose A and Q share an edge which starts at vertex v_1 and ends at vertex v_2. So A starts off at v_1 (which also lies on Q), moves along the shared edge of Q and A to v_2, and only then embarks on its journey of m-1 steps around A, bringing it back to v_1. We then append the collapsible k-cycle Q, bringing us back to v_1 again. Having constructed this collapsible cyclic path, we try to construct tailed paths by joining its ends to form a cycle and then splitting the cycle at the k-1 points of Q other than v_1, as per our recipe.

Well, k-2 of these splitting points will be fine; if we split there, we get a path that starts at the splitting point, takes as many steps around Q as are needed to get to v_1 (where the non-collapsible path A starts), takes the first step along that path (via the shared edge to v_2), and continues along A back to v_1, before finally returning to the splitting point along the remainder of Q. However, there is one splitting point which will not give a nice tailed path, and that point is v_2. The path starting at v_2 will take the k-1 steps along Q to v_1, take the first step around A to v_2 and … aargh! … it will at that point have completed a k-cycle and rendered itself a collapsible path, already excluded and not a candidate for our tail-removal scheme.

What is worse is that this cycle is represented twice in A mQ k (k)\mathbf{A}_m\mathbf{Q}_k^{(k)}—once by the path starting at v 1v_1 (which we have just considered), and once by the path starting at v 2v_2, which will, incidentally, exhibit exactly the same problem when we try to split it at v 1v_1.

So what has happened is that we get 2(k1)2(k-1) candidate tailed paths, when in reality there are only k2k-2.

The way to get rid of these is with \mathbf{A}_{m-1}\mathbf{Q}_k^{(k+1)}. This will include the sort of path that starts at v_2, goes m-1 steps around A to v_1, then completes its journey in k+1 steps: first to v_2, then all the way around Q and back to its starting point v_2 again. This gives exactly the same cycle as the troublesome one considered above, so is good for cancelling excess versions of it. What is more, this sort of path can't start at v_1 (it can't get back to v_1 in m-1 steps), so it counts this sort of cycle only once, not twice. So to get the correct count, what we need to do is subtract k Tr(\mathbf{A}_{m-1}\mathbf{Q}_k^{(k+1)}). Then these troublesome paths will be counted 2(k-1)-k=k-2 times, as we want.

Of course, we’re not out of the woods yet: what if A shares two edges with Q (either both at the start, or both at the end, or one at the start and one at the end)? There will now be only k3k-3 legitimate places to split. There are three vertices shared between A and Q, namely v 1v_1 at the start of the first shared edge, v 2v_2 between the first and second shared edges, and v 3v_3 at the end of the second shared edge. None of these are candidates for splitting to get a tail, for exactly the same reason as with the two shared vertices in the case of one shared edge (or, for that matter, the one shared vertex in the case of no shared edges): we end up with a complete circuit of the collapsible cycle Q, either at the start or at the end of our cyclic path. Furthermore, this sort of cycle will be counted three times—once for each path starting at a shared vertex. So we will count 3(k1)3(k-1) tailed paths when we only want k3k-3.

By an amazing stroke of luck, kA m1Q k (k+1)k\mathbf{A}_{m-1}\mathbf{Q}_k^{(k+1)} will sort this problem out for us too. This term counts this sort of cycle twice: once for the path starting at vertex v 2v_2 (which gets us to v 1v_1 in m1m-1 steps) and once for the path starting at v 3v_3 (which gets us to v 2v_2 in m1m-1 steps). So we total 3(k1)2k=k33(k-1)-2k=k-3. Indeed, this works for any cycle having a number of shared edges pp with p<kp&lt;k: we count (p+1)(k1)(p+1)(k-1) paths with (k1)A mQ k (k)(k-1)\mathbf{A}_m\mathbf{Q}_k^{(k)} and subtract pkp k paths with kA m1Q k (k+1)k\mathbf{A}_{m-1}\mathbf{Q}_k^{(k+1)}, leaving k(p+1)k-(p+1), which is what we want. Hoorah!

At the limit of k1k-1 shared edges, we end up counting the same cycle (p+1)=k(p+1)=k times over, but we want to end up with k(p+1)=0k-(p+1)=0 valid tails. From the (k1)k(k-1)k contributed by (k1)A mQ k (k)(k-1)\mathbf{A}_m\mathbf{Q}_k^{(k)} we subtract the kp=k(k1)k p=k(k-1) contributed by kA m1Q k (k+1)k\mathbf{A}_{m-1}\mathbf{Q}_k^{(k+1)}, giving 00, just as we want.

But we are not done yet. What happens when we have kk shared edges? Obviously these cannot all be at the start or all be at the end of the path given by A m\mathbf{A}_m, for then they would appear as a complete circuit of Q, so what we have is that the path A is itself tailed—that’s before we add on another tail in the form of Q k (k)\mathbf{Q}_k^{(k)}. In this situation, two things happen.

First, there are no valid tailed paths at all that can be obtained from this path by appending a circuit of Q and splitting. But in fact we already reached zero in the previous case, with p=k-1, since up to that point we consistently ended up with k-(p+1) valid splitting points. Previously, we decremented the number of valid tailed paths by 1 for each extra shared edge; that pattern has now been broken (we can't have -1 splitting points!).

Second, we only count the same cycle with k1k-1 different paths. We have now reached the point where A is (as a cycle) a complete circuit of the collapsible kk-cycle Q appended to a circuit of a non-collapsible cycle (of mkm-k steps). This shorter non-collapsible cycle must start and end at the same vertex vQv\in\mathit{Q}. And this vertex is of course not available as a starting point for A, because starting there would make A an invalid, collapsible path, rather than just a tailed path. So the cycle A gets counted only k1k-1 times, for each of the points in Q other than vv. (This hopefully sounds familiar—maybe confusingly familiar. Remember that, although the logic of path-counting is the same as it was way above, at the start of the whole tail-counting business, we are now dealing with A itself, not the catenation of A and Q.)

So, from (k1)A mQ k (k)(k-1)\mathbf{A}_m\mathbf{Q}_k^{(k)}, we are counting (k1)(k1)(k-1)(k-1) tailed paths, when we want 00.

What about our cancelling term kA m1Q k (k+1)k\mathbf{A}_{m-1}\mathbf{Q}_k^{(k+1)}? At this point, A m1\mathbf{A}_{m-1} only has k1k-1 shared edges. These can’t wrap around Q, so with this term, everything carries on as normal and is fine—in the sense that we follow the pattern and count this cycle kk times over. So we now have a contribution from this term of k 2k^2. The net is thus (k1) 2k 2=(2k1)(k-1)^2-k^2=-(2k-1). So we have overshot, and subtracted too much!

Fortunately, it is at precisely this point that \mathbf{A}_{m-k}\mathbf{Q}_k^{(2k)} starts to contribute paths (one path, to be precise). This is because, of course, combining the k shared edges from \mathbf{A}_m and the circuit of Q from \mathbf{Q}_k^{(k)}, we have exactly two circuits of Q in total.

So we can cancel the overshoot by adding on (2k1)A mkQ k (2k)(2k-1)\mathbf{A}_{m-k}\mathbf{Q}_k^{(2k)}.

Let’s keep going: we’re getting towards the end of this hard bit. What if A shares k+1 edges with Q? It’s super-tailed! Once again, even more than in the previous case, the shared edges cannot all be at the end or all at the beginning. In fact, only k-2 paths get counted, since now one of the edges is shared twice over, and none of the paths can begin at either end of this edge. Meanwhile, \mathbf{A}_{m-1}\mathbf{Q}_k^{(k+1)} has been hit by the same problem of wrapping around Q, and of forbidden starting points, and only counts k-1 paths. So we have (k-1)(k-2)-k(k-1)=-(2k-2) when we want 0. We also have a contribution of 2(2k-1) from the extra compensating term (2k-1)\mathbf{A}_{m-k}\mathbf{Q}_k^{(2k)}, giving a total of 2k. So we just add a term -2k\mathbf{A}_{m-(k+1)}\mathbf{Q}_k^{(2k+1)}, since at this point this same path appears for the first time in \mathbf{A}_{m-(k+1)}\mathbf{Q}_k^{(2k+1)}, and that will sort matters out for us.

It may appear at this stage that things are getting increasingly desperate, but in fact they are about to resolve themselves. As we get increasing overlaps, both A mQ k (k)\mathbf{A}_m\mathbf{Q}_k^{(k)} and A m1Q k (k+1)\mathbf{A}_{m-1}\mathbf{Q}_k^{(k+1)} have a gracefully declining path count per cycle. The former has a count of 2k(p+1)2k-(p+1) for cycles with an overlap of pp edges, while the latter has a count of 2kp2k-p for those same cycles. Thus they net (k1)(2k(p+1))k(2kp)=3k+(p+1)(k-1)(2k-(p+1))-k(2k-p)=-3k+(p+1). Meanwhile A mkQ k (2k)\mathbf{A}_{m-k}\mathbf{Q}_k^{(2k)} and A m(k+1)Q k (2k+1)\mathbf{A}_{m-(k+1)}\mathbf{Q}_k^{(2k+1)} have a gracefully increasing count of (pk+1)(p-k+1) and (pk)(p-k) for those cycles, giving a contribution of (2k1)(pk+1)2k(pk)=3k(p+1)(2k-1)(p-k+1)-2k(p-k)=3k-(p+1). Thus the total of the four terms together cancels to give 00 as desired, since none of these very overlapping paths give any valid tails.

This happy situation continues for a while. What happens when A shares k1k-1 edges with Q at both ends, giving p=2(k1)p=2(k-1)? At this point, the first term, (k1)A mQ k (k)(k-1)\mathbf{A}_m\mathbf{Q}_k^{(k)} is contributing just 2k(p+1)=12k-(p+1)=1 path, the second term contributes 22 paths, the third contributes pk+1=k1p-k+1=k-1, and the fourth k2k-2. So we have (k1)2k+(k1)(2k1)(k2)2k=0(k-1)-2k+(k-1)(2k-1)-(k-2)2k=0. The cancellation still works. At the next stage (p=2k1p=2k-1), the first term drops out altogether, the second contributes one path, the third reaches its high point of kk paths and the fourth contributes k1k-1. Checking, we see that we have k+k(2k1)(k1)2k=0-k+k(2k-1)-(k-1)2k=0. So the cancellation works here too!

At the next step (p=2kp=2k), we reprise an earlier situation. The second term stops contributing paths and drops out. The third term goes into decline and contributes only k1k-1 paths, and the fourth term reaches its peak at kk paths. At the same time, we get the first contribution from A m2kQ k (3k)\mathbf{A}_{m-2k}\mathbf{Q}_k^{(3k)}. The third and fourth terms contribute (2k1)(k1)2k 2=3k+1(2k-1)(k-1)-2k^2=-3k+1, so the new fifth term should clearly be (3k1)A m2kQ k 3k(3k-1)\mathbf{A}_{m-2k}\mathbf{Q}_k^{3k} in order to cancel it. At the next step (p=2k+1p=2k+1) we need a sixth term 3kA m(2k+1)Q k (3k+1)-3k\mathbf{A}_{m-(2k+1)}\mathbf{Q}_k^{(3k+1)}.

Let’s tabulate this for k=4:

(In the table below, “multiplier” is the thing we multiply the “term” by, “overlap” is the number of shared edges, and “count” is the number of paths per cycle.)

\array{ & & overlap\\ & &0&1&2&3&4&5&6&7&8&9&10&11\\ term&multiplier&count\\ \mathbf{A}_m\mathbf{Q}_k^{(k)}&k-1&1&2&3&4&3&2&1\\ \mathbf{A}_{m-1}\mathbf{Q}_k^{(k+1)}&k&&1&2&3&4&3&2&1\\ \mathbf{A}_{m-k}\mathbf{Q}_k^{(2k)}&2k-1&&&&&1&2&3&4&3&2&1\\ \mathbf{A}_{m-(k+1)}\mathbf{Q}_k^{(2k+1)}&2k&&&&&&1&2&3&4&3&2&1\\ \mathbf{A}_{m-2k}\mathbf{Q}_k^{(3k)}&3k-1&&&&&&&&&1&2&3&4\\ \mathbf{A}_{m-(2k+1)}\mathbf{Q}_k^{(3k+1)}&3k&&&&&&&&&&1&2&3\\ }

It is now reasonably clear that this pattern will continue. (Generically we have (n k-1)(2k-(p+(n-1)k+1))-n k(2k-(p-(n-1)k))=-(2n+1)k+1+p and ((n+1)k-1)(p-n k+1)-(n+1)k(p-n k)=(2n+1)k-1-p, which cancel.) For the number of tailed paths (we’ve replaced \mathbf{A}_m by \mathbf{A}_{m-k}, so that we can count t_m rather than t_{m+k}), we end up with:

t_m=\mathop{Tr}\left(\sum_{k=1}^m\sum_{n=1}^{\left\lfloor\frac{m}{k}\right\rfloor}\left((n k-1)\mathbf{A}_{m-n k}\mathbf{Q}_k^{(n k)}-n k\mathbf{A}_{m-(n k+1)}\mathbf{Q}_k^{(n k+1)}\right)\right)

Well … not quite …

There are those complications towards the end, when m-n k &lt; k, with \mathbf{A}_{m-n k} contributing those pesky short paths that fit entirely inside our collapsible k-cycle Q. There are always the same number of these—viz. k—and this time, they don't cancel properly, because each of the two terms in which they appear has a different multiplier. In most cases, these terms are of the form (n k-1)\mathbf{A}_{m-n k}\mathbf{Q}_k^{(n k)}-n k\mathbf{A}_{m-(n k+1)}\mathbf{Q}_k^{(n k+1)}, so there is a deficit of 1 short path (that is, (n k-1)-n k), which can easily be cured by adding on an extra term \mathbf{A}_0\mathbf{Q}_k^{(m)} (with multiplier 1); but if we have k|m then the last two terms in the sum are actually -(m-k)\mathbf{A}_{k-1}\mathbf{Q}_k^{(m-(k-1))}+(m-1)\mathbf{A}_0\mathbf{Q}_k^{(m)}, with an excess of -(m-k)+(m-1)=k-1, so we need to subtract (k-1)\mathbf{A}_0\mathbf{Q}_k^{(m)}, which we will do by subtracting k and adding 1.

So what we really end up with is:

t_m=\mathop{Tr}\left(\sum_{k=1}^m\left(\sum_{n=1}^{\left\lfloor\frac{m}{k}\right\rfloor}\left((n k-1)\mathbf{A}_{m-n k}\mathbf{Q}_k^{(n k)}-n k\mathbf{A}_{m-(n k+1)}\mathbf{Q}_k^{(n k+1)}\right)+\mathbf{A}_0\mathbf{Q}_k^{(m)}-\psi(k,m)k\mathbf{A}_0\mathbf{Q}_k^{(m)}\right)\right)

where the symbol ψ(k,m)\psi(k, m) is one I’ve just invented which is 11 if k|mk\vert m and 00 otherwise.

So the number of tailless, noncollapsible cyclic paths of length mm is

N_m=\mathop{Tr}(\mathbf{A}_m)-t_m=\mathop{Tr}(\mathbf{A}_{m-1}\mathbf{A})-\mathop{Tr}\left(\sum_{k=1}^m\left(\sum_{n=1}^{\left\lfloor\frac{m}{k}\right\rfloor}\left(n k\mathbf{A}_{m-n k}\mathbf{Q}_k^{(n k)}-(n k+1)\mathbf{A}_{m-(n k+1)}\mathbf{Q}_k^{(n k+1)}\right)+\mathbf{A}_0\mathbf{Q}_k^{(m)}-\psi(k,m)k\mathbf{A}_0\mathbf{Q}_k^{(m)}\right)\right)

Well, that’s the basic counting done. Onward and upward!

Part 4 in which we do some big sums, and get some surprisingly small results

At this point we are in a position to create two generating functions, on our way to building zeta functions and extracting a few of their properties. Our generating functions are for non-collapsible paths:

m=0 A mu m\sum_{m=0}^\infty\mathbf{A}_m u^m

and for non-collapsible tailless cyclic paths:

m=1 N mu m\sum_{m=1}^\infty N_m u^m

We are going to obtain some nice closed-form expressions for these generating functions. We’ll do it by multiplying them by the following expression:

Ξ(u)=IuA+ k=1 n=1 (Q k (nk)u nkQ k (nk+1)u nk+1)\Xi(u)=\mathbf{I}-u\mathbf{A}+\sum_{k=1}^\infty\sum_{n=1}^\infty(\mathbf{Q}_k^{(n k)}u^{n k}-\mathbf{Q}_k^{(n k+1)}u^{n k+1})

OK, here goes:

( m=0 A mu m)Ξ(u)(\sum_{m=0}^\infty \mathbf{A}_m u^m)\Xi(u)

=(\sum_{m=0}^\infty \mathbf{A}_m u^m)(\mathbf{I}-u\mathbf{A}+\sum_{k=1}^\infty\sum_{n=1}^\infty(\mathbf{Q}_k^{(n k)}u^{n k}-\mathbf{Q}_k^{(n k+1)}u^{n k+1}))

= m=0 A mu m( m=0 A mAu m+1 m=0 k=1 n=1 A mQ k (nk)u m+nk+ m=0 k=1 n=1 A mQ k (nk+1)u m+nk+1)=\sum_{m=0}^\infty\mathbf{A}_m u^m-(\sum_{m=0}^\infty\mathbf{A}_m\mathbf{A}u^{m+1}- \sum_{m=0}^\infty\sum_{k=1}^\infty\sum_{n=1}^\infty\mathbf{A}_m\mathbf{Q}_k^{(n k)}u^{m+n k}+\sum_{m=0}^\infty\sum_{k=1}^\infty\sum_{n=1}^\infty\mathbf{A}_m \mathbf{Q}_k^{(n k+1)}u^{m+n k+1})

= m=0 A mu m( m=0 A mAu m+1 k=1 n=1 m=0 A mQ k (nk)u m+nk+ k=1 n=1 m=0 A mQ k (nk+1)u m+nk+1)=\sum_{m=0}^\infty\mathbf{A}_m u^m-(\sum_{m=0}^\infty\mathbf{A}_m\mathbf{A}u^{m+1}- \sum_{k=1}^\infty\sum_{n=1}^\infty\sum_{m=0}^\infty \mathbf{A}_m\mathbf{Q}_k^{(n k)}u^{m+n k}+ \sum_{k=1}^\infty\sum_{n=1}^\infty\sum_{m=0}^\infty \mathbf{A}_m \mathbf{Q}_k^{(n k+1)}u^{m+n k+1})

Substituting m=m+1m\prime=m+1 in the second sum, m=m+nkm\prime=m+n k in the first triple sum and m=m+nk+1m\prime=m+n k+1 in the second triple sum, we get

m=0 A mu m( m=1 A m1Au m k=1 n=1 m=nk A mnkQ k (nk)u m+ k=1 n=1 m=nk+1 A m(nk+1)Q k (nk+1)u m)\sum_{m=0}^\infty\mathbf{A}_m u^m-(\sum_{m\prime=1}^\infty\mathbf{A}_{m\prime-1}\mathbf{A}u^{m\prime}- \sum_{k=1}^\infty\sum_{n=1}^\infty\sum_{m\prime=n k}^\infty \mathbf{A}_{m\prime-n k}\mathbf{Q}_k^{(n k)}u^{m\prime}+ \sum_{k=1}^\infty\sum_{n=1}^\infty\sum_{m\prime=n k+1}^\infty \mathbf{A}_{m\prime-(n k+1)} \mathbf{Q}_k^{(n k+1)}u^{m\prime})

= m=0 A mu m( m=1 A m1Au m k=1 m=k u m n=1 mkA mnkQ k (nk)+ k=1 m=k u m n=1 mkA m(nk+1)Q k (nk+1))=\sum_{m=0}^\infty\mathbf{A}_m u^m-(\sum_{m\prime=1}^\infty\mathbf{A}_{m\prime-1}\mathbf{A}u^{m\prime}- \sum_{k=1}^\infty \sum_{m\prime=k}^\infty u^{m\prime} \sum_{n=1}^{\left\lfloor\frac{m\prime}{k}\right\rfloor}\mathbf{A}_{m\prime-n k}\mathbf{Q}_k^{(n k)} + \sum_{k=1}^\infty\sum_{m\prime=k}^\infty u^{m\prime}\sum_{n=1}^{\left\lfloor\frac{m\prime}{k}\right\rfloor} \mathbf{A}_{m\prime-(n k+1)} \mathbf{Q}_k^{(n k+1)})

= m=0 A mu m( m=1 A m1Au m m=1 u m k=1 m n=1 mkA mnkQ k (nk)+ m=1 u m k=1 m n=1 mkA m(nk+1)Q k (nk+1))=\sum_{m=0}^\infty\mathbf{A}_m u^m-(\sum_{m\prime=1}^\infty\mathbf{A}_{m\prime-1}\mathbf{A}u^{m\prime}- \sum_{m\prime=1}^\infty u^{m\prime} \sum_{k=1}^{m\prime} \sum_{n=1}^{\left\lfloor\frac{m\prime}{k}\right\rfloor}\mathbf{A}_{m\prime-n k}\mathbf{Q}_k^{(n k)} + \sum_{m\prime=1}^\infty u^{m\prime}\sum_{k=1}^{m\prime} \sum_{n=1}^{\left\lfloor\frac{m\prime}{k}\right\rfloor} \mathbf{A}_{m\prime-(n k+1)} \mathbf{Q}_k^{(n k+1)})

= m=0 A mu m m=1 u m(A m1A k=1 m n=1 mkA mnkQ k (nk)+ k=1 m n=1 mkA m(nk+1)Q k (nk+1)))=\sum_{m=0}^\infty\mathbf{A}_m u^m-\sum_{m\prime=1}^\infty u^{m\prime} (\mathbf{A}_{m\prime-1}\mathbf{A} - \sum_{k=1}^{m\prime} \sum_{n=1}^{\left\lfloor\frac{m\prime}{k}\right\rfloor}\mathbf{A}_{m\prime-n k}\mathbf{Q}_k^{(n k)} + \sum_{k=1}^{m\prime} \sum_{n=1}^{\left\lfloor\frac{m\prime}{k}\right\rfloor} \mathbf{A}_{m\prime-(n k+1)} \mathbf{Q}_k^{(n k+1)}))

= m=0 A mu m m=1 u mA m=\sum_{m=0}^\infty\mathbf{A}_m u^m-\sum_{m\prime=1}^\infty u^{m\prime} \mathbf{A}_{m\prime}

=A 0u 0=\mathbf{A}_0 u^0

=I=\mathbf{I}

Phew!
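If you'd like to see the inversion happen concretely, here is a small sympy sketch. The example graph is my own choice, not from the discussion above: the directed 4-cycle with its single cycle declared collapsible, so that \mathbf{A}_m=\mathbf{A}^m for m&lt;4, \mathbf{A}_m=\mathbf{0} for m\geq 4, and \mathbf{Q}_4^{(n k)}=\mathbf{I}, \mathbf{Q}_4^{(n k+1)}=\mathbf{A}.

```python
import sympy as sp

u = sp.symbols('u')

# Toy example (an assumption, not from the post): the directed 4-cycle
# 0 -> 1 -> 2 -> 3 -> 0, with its single 4-cycle taken as collapsible.
A = sp.zeros(4, 4)
for i in range(4):
    A[i, (i + 1) % 4] = 1

# Non-collapsible paths: A_m = A^m for m < 4, and A_m = 0 for m >= 4,
# since any path of length 4 wraps the whole collapsible cycle.
gen = sp.eye(4) + A*u + A**2*u**2 + A**3*u**3

# Here Q_4^(nk) = I and Q_4^(nk+1) = A, so the double sum in Xi(u)
# telescopes into a geometric series u^4/(1 - u^4):
Xi = sp.eye(4) - A*u + (u**4/(1 - u**4)) * (sp.eye(4) - u*A)

# The generating function times Xi(u) collapses to the identity.
assert sp.simplify(gen * Xi) == sp.eye(4)
```

The cancellation is visible by hand too: gen\cdot(\mathbf{I}-u\mathbf{A})=(1-u^4)\mathbf{I} because \mathbf{A}^4=\mathbf{I} for this permutation matrix.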

Now we do the same thing for

m=1 N mu m\sum_{m=1}^\infty N_m u^m

=\sum_{m=1}^\infty\left(\mathop{Tr}(\mathbf{A}_{m-1}\mathbf{A})-\mathop{Tr}\left(\sum_{k=1}^m\left(\sum_{n=1}^{\left\lfloor\frac{m}{k}\right\rfloor}\left(n k\mathbf{A}_{m-n k}\mathbf{Q}_k^{(n k)}-(n k+1)\mathbf{A}_{m-(n k+1)}\mathbf{Q}_k^{(n k+1)}\right)+\mathbf{A}_0\mathbf{Q}_k^{(m)}-\psi(k,m)k\mathbf{A}_0\mathbf{Q}_k^{(m)}\right)\right)\right)u^m

We’ll only deal with the first and second terms, treating them in reverse order, since the second term is the hardest.

So first of all we’ll manipulate the expression

\sum_{m=1}^\infty\sum_{k=1}^m\sum_{n=1}^{\left\lfloor\frac{m}{k}\right\rfloor}(n k\mathbf{A}_{m-n k}\mathbf{Q}_k^{(n k)}-(n k+1)\mathbf{A}_{m-(n k+1)}\mathbf{Q}_k^{(n k+1)})u^m

to get it into a more convenient form:

\sum_{m=1}^\infty\sum_{k=1}^m\sum_{n=1}^{\left\lfloor\frac{m}{k}\right\rfloor}(n k\mathbf{A}_{m-n k}\mathbf{Q}_k^{(n k)}-(n k+1)\mathbf{A}_{m-(n k+1)}\mathbf{Q}_k^{(n k+1)})u^m

= k=1 m=k n=1 mk(nkA mnkQ k (nk)(nk+1)A m(nk+1)Q k nk+1)u m=\sum_{k=1}^\infty\sum_{m=k}^\infty \sum_{n=1}^{\left\lfloor\frac{m}{k}\right\rfloor}(n k\mathbf{A}_{m-n k}\mathbf{Q}_k^{(n k)}-(n k+1)\mathbf{A}_{m- (n k+1)}\mathbf{Q}_k^{n k+1})u^m

= k=1 n=1 m=nk (nkA mnkQ k (nk)(nk+1)A m(nk+1)Q k nk+1)u m=\sum_{k=1}^\infty\sum_{n=1}^\infty\sum_{m=n k}^\infty (n k\mathbf{A}_{m-n k}\mathbf{Q}_k^{(n k)}-(n k+1)\mathbf{A}_{m- (n k+1)}\mathbf{Q}_k^{n k+1})u^m

= k=1 n=1 m=nk nkA mnkQ k (nk)u m k=1 n=1 m=nk (nk+1)A m(nk+1)Q k nk+1u m=\sum_{k=1}^\infty\sum_{n=1}^\infty\sum_{m=n k}^\infty n k\mathbf{A}_{m-n k}\mathbf{Q}_k^{(n k)} u^m - \sum_{k=1}^\infty\sum_{n=1}^\infty\sum_{m=n k}^\infty (n k+1)\mathbf{A}_{m- (n k+1)}\mathbf{Q}_k^{n k+1}u^m

We substitute m=mnkm\prime=m-n k in the first triple sum and m=m(nk+1)m\prime=m-(n k+1) in the second triple sum:

k=1 n=1 m=0 nkA mQ k (nk)u m+nk k=1 n=1 m=1 (nk+1)A mQ k nk+1u m+nk+1\sum_{k=1}^\infty\sum_{n=1}^\infty\sum_{m\prime=0}^\infty n k\mathbf{A}_{m\prime}\mathbf{Q}_k^{(n k)} u^{m\prime+n k} - \sum_{k=1}^\infty\sum_{n=1}^\infty\sum_{m\prime=-1}^\infty (n k+1)\mathbf{A}_{m\prime}\mathbf{Q}_k^{n k+1}u^{m\prime+n k+1}

= k=1 n=1 nkQ k (nk)u nk m=0 A mu m k=1 n=1 (nk+1)Q k nk+1u nk+1 m=1 A mu m=\sum_{k=1}^\infty\sum_{n=1}^\infty n k\mathbf{Q}_k^{(n k)} u^{n k}\sum_{m\prime=0}^\infty\mathbf{A}_{m\prime}u^{m\prime} - \sum_{k=1}^\infty\sum_{n=1}^\infty (n k+1) \mathbf{Q}_k^{n k+1}u^{n k+1}\sum_{m\prime=-1}^\infty\mathbf{A}_{m\prime}u^{m\prime}

=( k=1 n=1 (nkQ k (nk)u nk(nk+1)Q k nk+1u nk+1))( m=0 A mu m)=(\sum_{k=1}^\infty\sum_{n=1}^\infty(n k\mathbf{Q}_k^{(n k)} u^{n k}- (n k+1)\mathbf{Q}_k^{n k+1}u^{n k+1})) (\sum_{m\prime=0}^\infty\mathbf{A}_{m\prime}u^{m\prime})

(where we have made use of the fact that A 1=0\mathbf{A}_{-1}=\mathbf{0}).

Since we’ve already shown that Ξ(u)\Xi(u) is the inverse of m=0 A mu m\sum_{m=0}^\infty\mathbf{A}_m u^m, multiplying this part by Ξ(u)\Xi(u) leaves behind just

k=1 n=1 nkQ k (nk)u nk(nk+1)Q k nk+1u nk+1\sum_{k=1}^\infty\sum_{n=1}^\infty n k\mathbf{Q}_k^{(n k)} u^{n k}- (n k+1)\mathbf{Q}_k^{n k+1}u^{n k+1}

As for the first part, m=1 A m1Au m\sum_{m=1}^\infty \mathbf{A}_{m-1}\mathbf{A}u^m, we substitute m=m1m\prime=m-1 and get

( m=1 A m1Au m)Ξ(u)(\sum_{m=1}^\infty\mathbf{A}_{m-1}\mathbf{A}u^m)\Xi(u)

=( m=0 A mAu m+1)Ξ(u)=(\sum_{m\prime=0}^\infty\mathbf{A}_{m\prime}\mathbf{A}u^{m\prime+1})\Xi(u)

=( m=0 A mu m)AuΞ(u)=(\sum_{m\prime=0}^\infty\mathbf{A}_{m\prime}u^{m\prime})\mathbf{A}u\Xi(u).

Up to conjugation (which costs nothing here, since the trace is invariant under cyclic permutation of factors), this is just \mathbf{A}u.

So, at last,

\sum_{m=1}^\infty N_m u^m=Tr\frac{\mathbf{A}u-\sum_{k=1}^\infty\sum_{n=1}^\infty\left(n k\mathbf{Q}_k^{(n k)}u^{n k}-(n k+1)\mathbf{Q}_k^{(n k+1)}u^{n k+1}\right)}{\mathbf{I}-\mathbf{A}u+\sum_{k=1}^\infty\sum_{n=1}^\infty\left(\mathbf{Q}_k^{(n k)}u^{n k}-\mathbf{Q}_k^{(n k+1)}u^{n k+1}\right)}-\sum_{k=1}^\infty\sum_{m=k}^\infty\left(Tr(\mathbf{A}_0\mathbf{Q}_k^{(m)})u^m-Tr(\psi(k,m)k\mathbf{A}_0\mathbf{Q}_k^{(m)})u^m\right)

=Tr\frac{\mathbf{A}u-\sum_{k=1}^\infty\sum_{n=1}^\infty\left(n k\mathbf{Q}_k^{(n k)}u^{n k}-(n k+1)\mathbf{Q}_k^{(n k+1)}u^{n k+1}\right)}{\mathbf{I}-\mathbf{A}u+\sum_{k=1}^\infty\sum_{n=1}^\infty\left(\mathbf{Q}_k^{(n k)}u^{n k}-\mathbf{Q}_k^{(n k+1)}u^{n k+1}\right)}-\sum_{k=1}^\infty\sum_{m=k}^\infty Tr(\mathbf{Q}_k^{(m)})u^m+\sum_{k=1}^\infty\sum_{n=1}^\infty Tr(k\mathbf{Q}_k^{(n k)})u^{n k}

Let’s first of all consider that big fraction.

Since the numerator is uddu-u\frac{\text{d}}{\text{d} u} of the denominator, we have

TrAu k=1 n=1 (nkQ k (nk)u nk(nk+1)Q k (nk+1)u nk+1)IAu+ k=1 n=1 (Q k (nk)u nkQ k (nk+1)u nk+1) Tr\frac{\mathbf{A}u-\sum_{k=1}^\infty\sum_{n=1}^\infty\left(n k\mathbf{Q}_k^{(n k)}u^{n k}-(n k+1)\mathbf{Q}_k^{(n k+1)}u^{n k+1}\right)}{\mathbf{I}-\mathbf{A}u+\sum_{k=1}^\infty\sum_{n=1}^\infty\left(\mathbf{Q}_k^{(n k)}u^{n k}-\mathbf{Q}_k^{(n k+1)}u^{n k+1}\right)}

=Tr(udduln(IAu+ k=1 n=1 (Q k (nk)u nkQ k (nk+1)u nk+1)))=Tr(-u\frac{\text{d}}{\text{d} u}ln(\mathbf{I}-\mathbf{A}u+\sum_{k=1}^\infty\sum_{n=1}^\infty(\mathbf{Q}_k^{(n k)}u^{n k}-\mathbf{Q}_k^{(n k+1)}u^{n k+1})))

=Tr(uddulnΞ(u))=Tr(-u\frac{\text{d}}{\text{d}u}ln\Xi(u))

(where, just to remind ourselves,

Ξ(u)=IuA+ k=1 n=1 (Q k (nk)u nkQ k (nk+1)u nk+1)\Xi(u)=\mathbf{I}-u\mathbf{A}+\sum_{k=1}^\infty\sum_{n=1}^\infty(\mathbf{Q}_k^{(n k)}u^{n k}-\mathbf{Q}_k^{(n k+1)}u^{n k+1}).)

Finally, at this point, we’ll take notice of the fact that, since we are dealing with cycles, Q k (nk)=Q k (0)n\mathbf{Q}_k^{(n k)}=\mathbf{Q}_k^{(0)}\forall n, and Q k (nk+1)=Q k (1)n\mathbf{Q}_k^{(n k+1)}=\mathbf{Q}_k^{(1)}\forall n.

So we have

Ξ(u)=IAu+ k=1 (u k1u kQ k (0)u k+11u kQ k (1))\Xi(u)=\mathbf{I}-\mathbf{A}u+\sum_{k=1}^\infty\left(\frac{u^k}{1-u^k}\mathbf{Q}_k^{(0)}-\frac{u^{k+1}}{1-u^k}\mathbf{Q}_k^{(1)}\right)

=IAu+ k=1 u k1u k(Q k (0)uQ k (1))=\mathbf{I}-\mathbf{A}u+\sum_{k=1}^\infty\frac{u^k}{1-u^k}(\mathbf{Q}_k^{(0)}-u\mathbf{Q}_k^{(1)})

We can also apply this cyclic rule to the other terms, so we have

k=1 m=k Tr(Q k (m))u m= k=1 u k1u k m=0 k1Tr(Q k (m))u m-\sum_{k=1}^\infty\sum_{m=k}^\infty Tr(\mathbf{Q}_k^{(m)})u^m=-\sum_{k=1}^\infty\frac{u^k}{1-u^k}\sum_{m=0}^{k-1} Tr(\mathbf{Q}_k^{( m)})u^m

and

k=1 n=1 kTr(Q k (nk))u nk= k=1 ku k1u kTr(Q k (0))\sum_{k=1}^\infty \sum_{n=1}^\infty k Tr(\mathbf{Q}_k^{(n k)})u^{n k}=\sum_{k=1}^\infty k\frac{u^k}{1-u^k}Tr(\mathbf{Q}_k^{(0)})

=-u \frac{\text{d}}{\text{d} u}\sum_{k=1}^\infty ln(1-u^k)Tr(\mathbf{Q}_k^{(0)})

Now, let us introduce a simplification. If, as we have been assuming so far, each collapsible kk-cycle passes through each of its vertices only once, then Tr(Q k (m))Tr(\mathbf{Q}_k^{(m)}) is 00 unless k|mk\vert m. So k=1 u k1u k m=0 k1Tr(Q k (m))u m\sum_{k=1}^\infty\frac{u^k}{1-u^k}\sum_{m=0}^{k-1} Tr(\mathbf{Q}_k^{(m)})u^m reduces to k=1 u k1u kTr(Q k (0))\sum_{k=1}^\infty\frac{u^k}{1-u^k}Tr(\mathbf{Q}_k^{(0)})

Thus the last two terms together reduce to -u \frac{\text{d}}{\text{d} u}\sum_{k=1}^\infty\frac{k-1}{k}ln(1-u^k)Tr(\mathbf{Q}_k^{(0)})

Hence, finally,

\sum_{m=1}^\infty N_m u^m=-u\frac{\text{d}}{\text{d}u}Tr(ln\Xi(u))-u \frac{\text{d}}{\text{d} u}\sum_{k=1}^\infty\frac{k-1}{k}ln(1-u^k)Tr(\mathbf{Q}_k^{(0)})

But, by the same arguments as at the beginning of this post,

m=1 N mu m=uddulnZ(u)\sum_{m=1}^\infty N_m u^m=u\frac{\text{d}}{\text{d}u}ln Z(u).

In short,

Z(u)^{-1}=\vert\Xi(u)\vert \product_{k=1}^\infty(1-u^k)^{\frac{k-1}{k}Tr(\mathbf{Q}_k^{(0)})}

In the more general case, we can include the terms in Q k (m)\mathbf{Q}_k^{(m)} with mkm\ne k, which give us extra similar, but weirder, factors on the end of the product.

And that’s it! Well, that’s really the beginning rather than the end, but we’ve waded through enough grunge for the moment.
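As a quick sanity check on the closed form, here is a hedged sympy sketch on a toy example of my own choosing (not from the text): a bare directed 4-cycle, all of whose cycles are collapsible, so that there are no tailless non-collapsible cycles at all and Z(u) ought to come out as identically 1.

```python
import sympy as sp

u = sp.symbols('u')

# Assumed toy example: directed 4-cycle, its single cycle collapsible,
# so Q_4^(0) = I, Q_4^(1) = A, and Tr Q_4^(0) = 4.
A = sp.zeros(4, 4)
for i in range(4):
    A[i, (i + 1) % 4] = 1

k = 4
# Xi(u) = I - Au + u^k/(1-u^k) (Q_k^(0) - u Q_k^(1))
Xi = sp.eye(4) - A*u + (u**k/(1 - u**k)) * (sp.eye(4) - u*A)

# Z(u)^{-1} = |Xi(u)| (1-u^k)^{((k-1)/k) Tr Q_k^(0)}; the exponent is 3 here.
Zinv = sp.simplify(Xi.det() * (1 - u**k)**3)

# No non-collapsible cycles exist, so Z(u) should be identically 1.
assert Zinv == 1
```

(Here \vert\Xi(u)\vert=(1-u^4)^{-3}, which the product factor cancels exactly.)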

Part 5 in which we attempt to relate the foregoing results to things we already know, and to conceive of further generalisations

First, we just need to check a couple of things.

5a Specialising to match the Ihara zeta function

For the Ihara zeta function, we are only concerned with Q 2 (1)=A\mathbf{Q}_2^{(1)}=\mathbf{A} and Q 2 (0)=I+Q\mathbf{Q}_2^{(0)}=\mathbf{I}+\mathbf{Q}.

So Ξ(u)=IAu+u 21u 2I+u 21u 2Qu 21u 2Au\Xi(u)=\mathbf{I}-\mathbf{A}u+\frac{u^2}{1-u^2}\mathbf{I}+\frac{u^2}{1-u^2}\mathbf{Q}-\frac{u^2}{1-u^2}\mathbf{A}u

=11u 2I11u 2Au+u 21u 2Q=\frac{1}{1-u^2}\mathbf{I}-\frac{1}{1-u^2}\mathbf{A}u+\frac{u^2}{1-u^2}\mathbf{Q}

=11u 2(IAu+Qu 2)=\frac{1}{1-u^2}(\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^2)

while

k=1 (1u k) k1kTr(Q k (0))=(1u 2) 12Tr(Q 2 (0))\product_{k=1}^\infty(1-u^k)^{\frac{k-1}{k}Tr(\mathbf{Q}_k^{(0)})}=(1-u^2)^{\frac{1}{2}Tr(\mathbf{Q}_2^{(0)})}

=(1u 2) 12Tr(Q+I)=(1-u^2)^{\frac{1}{2}Tr(\mathbf{Q}+\mathbf{I})}

So

Z(u) 1=|Ξ(u)| k=1 (1u k) k1kTr(Q k (0))Z(u)^{-1}=\vert\Xi(u)\vert \product_{k=1}^\infty(1-u^k)^{\frac{k-1}{k}Tr(\mathbf{Q}_k^{(0)})}

=|1(1u 2)(IAu+Qu 2)|(1u 2) 12Tr(Q+I)=\vert \frac{1}{(1-u^2)} (\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^2)\vert (1-u^2)^{\frac{1}{2}Tr(\mathbf{Q}+\mathbf{I})}

=|IAu+Qu 2|(1u 2) 12Tr(QI)=\vert\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^2\vert (1-u^2)^{\frac{1}{2}Tr(\mathbf{Q}-\mathbf{I})}

=|IAu+Qu 2|(1u 2) 12(2E2V)=\vert\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^2\vert (1-u^2)^{\frac{1}{2}(2E-2V)}

(where EE is the number of edges and VV is the number of vertices)

=|IAu+Qu 2|(1u 2) χ=\vert\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^2\vert (1-u^2)^{-\chi}

as expected.
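For what it's worth, the Ihara specialisation can also be checked by machine. The sketch below uses K4 as my test graph (an assumption, not from the text) and compares the right-hand side above with det(\mathbf{I}-u\mathbf{B}) for the non-backtracking directed-edge (Hashimoto) matrix \mathbf{B}, which is the standard determinant expression for Z(u)^{-1}; their agreement is Bass's theorem.

```python
import sympy as sp

u = sp.symbols('u')

# Test graph (my choice): K4, the complete graph on 4 vertices.
A = sp.Matrix(4, 4, lambda i, j: 0 if i == j else 1)
Q = sp.diag(*[sum(A.row(i)) - 1 for i in range(4)])  # Q = D - I
V, E = 4, 6

# Vertex form: (1-u^2)^{E-V} det(I - Au + Qu^2)
ihara = sp.expand((1 - u**2)**(E - V) * (sp.eye(4) - A*u + Q*u**2).det())

# Hashimoto non-backtracking matrix on the 2E directed edges:
# B[(i,j),(j,k)] = 1 whenever k != i.
edges = [(i, j) for i in range(4) for j in range(4) if A[i, j]]
B = sp.Matrix(len(edges), len(edges),
              lambda a, b: 1 if edges[a][1] == edges[b][0]
                             and edges[b][1] != edges[a][0] else 0)
hashimoto = sp.expand((sp.eye(len(edges)) - u*B).det())

assert ihara == hashimoto  # Bass's theorem: both equal Z(u)^{-1}
```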

5b Getting the Euler characteristic

Now let us take a Leinsteresque look at that matrix Ξ(u)=IAu+ k=1 u k1u k(Q k (0)uQ k (1))\Xi(u)=\mathbf{I}-\mathbf{A}u+\sum_{k=1}^\infty\frac{u^k}{1-u^k}(\mathbf{Q}_k^{(0)}-u\mathbf{Q}_k^{(1)}) by adding up all its elements and then letting u1u\rightarrow 1.

There are two straightforward bits:

entriesI=VV\sum_{entries} \mathbf{I} = V\rightarrow V

entriesAu=EuE\sum_{entries} -\mathbf{A}u = -E u\rightarrow-E.

Perhaps you can see where this is heading, but let’s check properly.

Q k (0)\mathbf{Q}_k^{(0)} has one entry of 11 for each vertex of each collapsible cycle (aka face) with kk vertices, so entriesQ k (0)=kF k\sum_{entries}\mathbf{Q}_k^{(0)}=k F_k where F kF_k is the number of faces with kk vertices (or edges). Exactly the same is true of Q k (1)\mathbf{Q}_k^{(1)}, although the 11s are scattered hither and yon, rather than being concentrated on the diagonal.

So

$$\sum_{entries}\left(\sum_{k=1}^\infty\frac{u^k}{1-u^k}(\mathbf{Q}_k^{(0)}-u\mathbf{Q}_k^{(1)})\right)$$

$$=\sum_{k=1}^\infty\left(\frac{u^k}{1-u^k}(1-u)k F_k \right)$$

$$=\sum_{k=1}^\infty\left(u^k\frac{1}{1+u+\dots+u^{k-1}}k F_k \right)$$

$$\rightarrow\sum_{k=1}^\infty\frac{1}{k}k F_k$$

$$=\sum_{k=1}^\infty F_k$$

$$=F$$

(where $F$ is of course the total number of faces).

So $\sum_{entries}\Xi(u)\rightarrow V-E+F=\chi$.

Which is good: we reproduce the Leinster result, at least if we ignore those irritating factors of $(1-u^k)^{\frac{k-1}{k}Tr(\mathbf{Q}_k^{(0)})}$.

We can also try to reproduce the Leinster weightings (or co-weightings), at least in this invertible case, by multiplying $\Xi(u)$ by the column vector of all $1$s, and again letting $u\rightarrow 1$. The result is straightforward: each vertex gets assigned $1$ by $\mathbf{I}$, minus the number of outgoing edges (or incoming edges for a co-weighting) due to $-\mathbf{A}u$, plus $\frac{1}{k}$ for each collapsible $k$-cycle that it lies on. Which seems reasonable.
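To make the weighting recipe concrete, here is a minimal sketch (my own toy example, not from the post) for a directed triangle whose single 3-cycle is collapsible: each vertex receives $1$, minus its outdegree, plus $\frac{1}{3}$ for the one collapsible 3-cycle through it, and the weights sum to $\chi=V-E+F=1$.

```python
from fractions import Fraction

# Directed triangle 0 -> 1 -> 2 -> 0, with its single 3-cycle marked collapsible.
V, E = 3, 3
out_edges = {0: [1], 1: [2], 2: [0]}
faces = [([0, 1, 2], 3)]  # (vertices on the collapsible cycle, its length k)

# Per-vertex limit of Xi(u) applied to the all-ones vector as u -> 1:
# 1 (from I) - outdegree (from -Au) + sum of 1/k over collapsible k-cycles through it.
weight = {}
for v in range(V):
    w = Fraction(1) - len(out_edges[v])
    for verts, k in faces:
        if v in verts:
            w += Fraction(1, k)
    weight[v] = w

chi = V - E + len(faces)        # Euler characteristic V - E + F
print(weight)                   # each vertex gets 1 - 1 + 1/3 = 1/3
print(sum(weight.values()) == chi)
```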

5c A generalisation of Ihara

We can also try to generalise the Ihara formula for $k$ other than $2$. If all the collapsible cycles have the same number $k$ of vertices, and every edge lies on exactly one $k$-cycle, then $\mathbf{Q}_k^{(1)}=\mathbf{A}$, and if we define $\mathbf{Q}=\mathbf{Q}_k^{(0)}-\mathbf{I}$, we get

$$\Xi(u)=\mathbf{I}-\mathbf{A}u+\frac{u^k}{1-u^k}\mathbf{I}+\frac{u^k}{1-u^k}\mathbf{Q}-\frac{u^k}{1-u^k}\mathbf{A}u$$

$$=\frac{1}{1-u^k}\mathbf{I}-\frac{1}{1-u^k}\mathbf{A}u+\frac{u^k}{1-u^k}\mathbf{Q}$$

$$=\frac{1}{1-u^k}(\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^k)$$

while

$$\prod_{k=1}^\infty(1-u^k)^{\frac{k-1}{k}Tr(\mathbf{Q}_k^{(0)})}=(1-u^k)^{\frac{k-1}{k}Tr(\mathbf{Q}_k^{(0)})}$$

$$=(1-u^k)^{\frac{k-1}{k}k F_k}$$

$$=(1-u^k)^{(k-1)F_k}$$

$$=(1-u^k)^{k F-F}$$

Bearing in mind that every edge belongs to exactly one $k$-cycle, $k F=E$.

So

$$Z(u)^{-1}=\vert\Xi(u)\vert \prod_{k=1}^\infty(1-u^k)^{\frac{k-1}{k}Tr(\mathbf{Q}_k^{(0)})}$$

$$=\frac{1}{(1-u^k)^V}\vert\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^k\vert (1-u^k)^{E-F}$$

$$=\vert\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^k\vert (1-u^k)^{-V+E-F}$$

$$=\vert\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^k\vert (1-u^k)^{-\chi}$$

5d Zeros and poles at $u=1$

In fact, we can do a bit better than this. If every edge lies on some collapsible cycle, where the cycles are not necessarily all the same length, we get a product of terms of the form

$$\prod_{k=1}^\infty(1-u^k)^{-\chi_k}$$

This will have a zero of order $-\chi$ at $u=1$ (where a pole of order $n$ counts as a zero of order $-n$).

In the case $k=2$, $\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^k$ tends to the graph Laplacian as $u\rightarrow 1$. The graph Laplacian has a number of zero eigenvalues equal to the number of (strongly connected) components of the graph, where we define two points to belong to the same strongly connected component iff there is a path from each one to the other (i.e. not just one way). (Obviously all connected components are strongly connected if every edge belongs to a cycle, a fortiori if it belongs to a collapsible cycle.) The number of zero eigenvalues will also be the order of the zero of the determinant $\vert\mathbf{I}-\mathbf{A}u+\mathbf{Q}u^k\vert$ at $u=1$. So, considering $Z(u)$, we get a zero (or pole) at $u=1$ whose order is the extended Euler characteristic $-C+V-E+F$, where we subtract the number of components $C$ (i.e. $(-1)$-cells).
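The claim about zero eigenvalues is easy to check directly: with $Q=D-I$, at $u=1$ the matrix $\mathbf{I}-\mathbf{A}+\mathbf{Q}$ is $D-A$, the graph Laplacian, whose nullity counts components. A small sketch over exact rationals (the two-triangle example is mine), assuming an undirected graph so that connected and strongly connected coincide:

```python
from fractions import Fraction

def nullity(M):
    # Nullity = size - rank, by Gaussian elimination over exact rationals.
    M = [[Fraction(x) for x in row] for row in M]
    n = len(M)
    rank = 0
    for col in range(n):
        pivot = next((r for r in range(rank, n) if M[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rank + 1, n):
            f = M[r][col] / M[rank][col]
            for c in range(col, n):
                M[r][c] -= f * M[rank][c]
        rank += 1
    return n - rank

# Two disjoint triangles: vertices {0,1,2} and {3,4,5}, so 2 components.
V = 6
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
A = [[0] * V for _ in range(V)]
for a, b in edges:
    A[a][b] = A[b][a] = 1

# Graph Laplacian L = D - A.
L = [[(sum(A[i]) if i == j else 0) - A[i][j] for j in range(V)] for i in range(V)]

print(nullity(L))  # 2: one zero eigenvalue per connected component
```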

This seems as though it ought to be part of a bigger pattern. It certainly applies to some cases other than $k=2$. But I have no idea how widely it really applies, what the conditions are that we need for it to work, or what the bigger pattern really is.

5e More general kinds of finitely generated 2-categories

Now I want to talk vaguely about a couple of generalisations.

The first thing I want to discuss is what happens when an edge is shared between two or more collapsible $k$-cycles.

There are two reasonable-sounding possibilities. The first is that we take the matrix for each $k$-cycle separately (even if they overlap), take the $n$th power of each of those matrices separately, and then add up the results, to get $\mathbf{Q}_k^{(n)}$. The alternative is to take the adjacency matrix for the collapsible part of the graph, and take $n$th powers of that.

The reason we might want to take the second choice is that, if we don’t, the path-counting can come out funny, with negative numbers of paths in unexpected places. For a while, this sounded like a convincing reason.

But after further thought, I became convinced that, if I was interested in Euler characteristics, the negative path numbers etc. were not a problem, and my original scheme—the first one—made more sense. With generating functions, as a general rule, we are counting members of a set. But the point about path-counting in a 2-graph is that we have not a set but a category of paths; and so we can expect our generating function to contain category cardinalities rather than set cardinalities. So fractional or negative numbers are only to be expected.

Category cardinalities feel somehow more dynamic than groupoid cardinalities. With groupoids, every object in a component is pretty much interchangeable, so it doesn’t seem to require much thought to throw away every object but one. With the Leinster style of category cardinalities, on the other hand, although the results are the same, the category seems to be telling you how to get rid of the objects: remove this object down this arrow! (And now remove it again, down this other arrow!—there’s our negative weighting.)

We can see why this is important in the case of paths and generating functions by considering what the category of paths looks like. I mean the category whose objects are paths and whose morphisms are (certain specified) homotopies of paths. This category is in fact generated by a graph. I don’t mean the underlying graph of the 2-category of paths; I mean the underlying graph of the 1-category with paths as objects: the vertices of the graph (objects of the category) are paths; each path has an edge emanating from it for each collapsible cycle that it contains, sending it to the shorter path with that cycle removed; and the morphisms are then the paths in this graph: catenations of edges. We push our way from a path with collapsible cycles all the way down the available edges as far as we can go, to the homotopic path with no collapsible cycles in it.

The reason we need to do this is that we are not just counting homotopy classes; we are also assigning them a norm, so we know where to put them in the generating function. Since this norm is a path length, we need to pick a path. And the path we pick is the one at the end of the line in its component of the path category.

This does raise the question of what to do if we have a different path category, e.g. one which inserts cycles or contains cycles itself. I was going to make some remarks about this, but the only definite conclusion I’ve come to is that it’s harder than it looks.

It does suggest, however, that we pay attention to the fact that, if we treat each collapsible cycle as a separate graph in its own right, then $\mathbf{Q}_k^{(0)}$ restricted to one collapsible cycle is simply the identity $\mathbf{I}$ on that subgraph, while $\mathbf{Q}_k^{(1)}$ is the adjacency operator $\mathbf{A}$ on the subgraph, so, letting $\mathfrak{F}$ be the set of faces, we can rewrite $\Xi(u)$ as

$$\mathbf{I}-\mathbf{A}u+\sum_{F\in\mathfrak{F}}\frac{u^{k(F)}}{1-u^{k(F)}}(\mathbf{I}_F-\mathbf{A}_F u)$$

The second thing I want to talk about is what to do if a collapsible cycle includes the same vertex more than once. Some brief sketches indicate that this should work so long as $\mathbf{Q}_k^{(n)}$ is exactly the matrix of numbers of paths going $n$ steps around the collapsible cycle (and not the $n$th power of $\mathbf{Q}_k^{(1)}$). We also need to remember that $\mathbf{Q}_k^{(n)}$ will have non-zero trace even if $n$ is not a multiple of $k$, so we need to use the extra factors corresponding to this.

I wanted to say a bit more about this, but it started to grow out of control, and I think it would fit in better elsewhere, so maybe I will leave it for a later post.

The final thing I want to say is that since the main result is so simple, there really ought to be a nice combinatorial argument for it that doesn’t involve lots of multiple infinite sums over large, complicated expressions. Here’s hoping!

Posted by: Tim Silverman on June 2, 2007 1:46 PM | Permalink | Reply to this
Read the post QFT of Charged n-Particle: The Canonical 1-Particle
Weblog: The n-Category Café
Excerpt: On the category of paths whose canonical Leinster measure reproduces the path integral measure appearing in the quantization of the charged particle.
Tracked: March 19, 2007 10:43 PM
Read the post Report-Back on BMC
Weblog: The n-Category Café
Excerpt: Bruce Bartlett reports from the British Mathematics Colloquium 2007
Tracked: April 22, 2007 8:17 PM
Read the post Metric Spaces
Weblog: The n-Category Café
Excerpt: Tom Leinster on the cardinality of a metric space, thought of as an enriched category --- and an interesting relation to geometric measure theory.
Tracked: February 9, 2008 7:32 PM
Read the post News on Measures on Groupoids?
Weblog: The n-Category Café
Excerpt: Benjamin Bahr apparently thought about measures on groupoids of connections.
Tracked: July 17, 2008 10:00 AM
Read the post Magnitude of Metric Spaces: A Roundup
Weblog: The n-Category Café
Excerpt: Resources on magnitude of metric spaces.
Tracked: January 10, 2011 4:04 PM
