The Nucleus of a Profunctor: Some Categorified Linear Algebra
Posted by Simon Willerton
Most of us learnt as undergraduates that from an $m \times n$-matrix $M$ you get two linear maps $M \colon \mathbb{R}^n \to \mathbb{R}^m$ and $M^{\mathrm{T}} \colon \mathbb{R}^m \to \mathbb{R}^n$, and that these are adjoint with respect to the standard inner products on $\mathbb{R}^n$ and $\mathbb{R}^m$: $\langle M v, w \rangle_{\mathbb{R}^m} = \langle v, M^{\mathrm{T}} w \rangle_{\mathbb{R}^n}.$ I want to explain how I think of (at least part of) an enriched category construction, the nucleus of a profunctor, as being analogous. I learnt about the nucleus idea in a recent paper by Dusko Pavlovic.
Quantitative Concept Analysis, in Formal Concept Analysis, Lecture Notes in Computer Science, Volume 7278 (2012) 260–277; also arxiv:1204.5802
The construction does seem to have been described before for categories enriched over quantales in the following paper, but I don’t know if it’s known in the category theoretic community at large.
Javier Gutiérrez García, Iraide Mardones-Pérez, María Angeles de Prada Vicente, and Dexue Zhang, Fuzzy Galois connections categorically, Mathematical Logic Quarterly 56, No. 2, 131–147 (2010).
Let’s fix a suitable category $\mathcal{V}$ to enrich over. Pick $\mathcal{V} = \mathrm{Set}$ if you are more comfortable with ordinary categories, so that a $\mathcal{V}$-category is just an ordinary small category.
The analogue of a matrix in the world of $\mathcal{V}$-categories is a profunctor $\mathcal{M}$ between two $\mathcal{V}$-categories $\mathcal{A}$ and $\mathcal{B}$. This gives rise to two $\mathcal{V}$-functors $\mathcal{M}_*$ and $\mathcal{M}^*$, between the associated categories of presheaves and opcopresheaves, which are adjoint. What we will actually be interested in is the nucleus, which is the centre, or invariant part, of this adjunction. In the linear algebra world, the analogue of this is the fixed point set of $M_* M^*$ or, equivalently, the fixed point set of $M^* M_*$. Rather naughtily, I won’t motivate the construction in this post, but in the next post, hopefully, I will discuss Formal (and Fuzzy) Concept Analysis and describe what this has to do with it.
The observant amongst you might have observed that this is a generalization of the Isbell completion of a category which I have mentioned here before. In that case you work with the identity profunctor, $\mathcal{M} = \mathcal{A}(-,-)$.
Some undergraduate linear algebra (done a bit oddly)
I will change the notation slightly from the introduction and work with two finite sets $X$ and $Y$ rather than two integers $m$ and $n$. If we pick an ordering on $X$ and on $Y$ then we can identify them with $\{1, \dots, m\}$ and $\{1, \dots, n\}$.
Associated to $X$ and $Y$ are the free vector spaces on $X$ and $Y$, namely $\mathbb{R}^X$ and $\mathbb{R}^Y$. We can think of $\mathbb{R}^X$ in at least two ways: on the one hand as the set of formal linear combinations of elements of $X$; and on the other hand as the $\mathbb{R}$-valued functions on $X$. We will think of it here as the $\mathbb{R}$-valued functions, and so if $v \in \mathbb{R}^X$ is a vector and $x \in X$ then we will write $v(x)$ for the $x$-component of the vector. As undergraduates, we would more likely write $v_x$ for this.
The vector spaces $\mathbb{R}^X$ and $\mathbb{R}^Y$ have canonical inner products on them, namely, for $v, w \in \mathbb{R}^X$ we have $\langle v, w \rangle_{\mathbb{R}^X} := \sum_{x \in X} v(x)\, w(x).$ Starting with an $X \times Y$-matrix $M$ (that is, a function $M \colon X \times Y \to \mathbb{R}$) we can get a pair of adjoint maps $M_* \colon \mathbb{R}^Y \to \mathbb{R}^X$ and $M^* \colon \mathbb{R}^X \to \mathbb{R}^Y$. In each case, we form the map in two steps.
For the first map, starting with the matrix $M \colon X \times Y \to \mathbb{R}$, we take the adjoint (such an overworked word!) to get a function $Y \to \mathbb{R}^X$ given by $y \mapsto M(-, y)$. We then use the fact that $\mathbb{R}^Y$ is the free vector space on $Y$, so there is a unique linear extension of that function to $\mathbb{R}^Y$. This is the linear map that you think it is: $M_* \colon \mathbb{R}^Y \to \mathbb{R}^X, \qquad (M_* v)(x) = \sum_{y \in Y} M(x, y)\, v(y).$ As undergraduates we would write this as $(M v)_i = M_{i j} v_j$, if we were using the Einstein summation convention.
Similarly, for the second map we perform two steps. Firstly we start with $M$ and take the adjoint function $X \to \mathbb{R}^Y$ given by $x \mapsto M(x, -)$. Then we extend this by linearity to get a linear function $M^* \colon \mathbb{R}^X \to \mathbb{R}^Y$ which is given by $(M^* w)(y) = \sum_{x \in X} M(x, y)\, w(x).$ In undergraduate notation we might write $(M^{\mathrm{T}} w)_j = M_{i j} w_i$.
Now these two functions we have constructed are adjoint in that $\langle M_* v, w \rangle_{\mathbb{R}^X} = \langle v, M^* w \rangle_{\mathbb{R}^Y};$ this is easy to see as both the left and right hand sides are equal to $\sum_{x \in X} \sum_{y \in Y} M(x, y)\, v(y)\, w(x).$ I’ve gone through that in some gory detail, so that the ideas of the categorification I’m about to do will look analogous.
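Since everything here is finite dimensional, the adjointness is easy to check numerically. Here is a quick sketch, assuming numpy; the variable names are mine, not standard:

```python
# Numerical sanity check of <M_* v, w> = <v, M^* w>.
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4                      # |X| = m, |Y| = n
M = rng.standard_normal((m, n))  # an X-by-Y matrix

v = rng.standard_normal(n)       # a vector in R^Y
w = rng.standard_normal(m)       # a vector in R^X

Mv = M @ v                       # (M_* v)(x) = sum_y M(x,y) v(y)
Mw = M.T @ w                     # (M^* w)(y) = sum_x M(x,y) w(x)

lhs = np.dot(Mv, w)              # <M_* v, w> in R^X
rhs = np.dot(v, Mw)              # <v, M^* w> in R^Y
assert np.isclose(lhs, rhs)      # both equal sum_{x,y} M(x,y) v(y) w(x)
```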
Some categorified linear algebra
Let’s fix a suitable category $\mathcal{V}$ to enrich over; this should be closed symmetric monoidal and sufficiently complete and cocomplete. As I said above, think of $\mathcal{V} = \mathrm{Set}$ if you are more comfortable with ordinary categories, so that a $\mathcal{V}$-category is just an ordinary small category.
I like thinking about metric-like things so I would find $\mathcal{V} = [0, \infty]$ interesting, although a good, motivating example will also come from taking $\mathcal{V} = \mathbf{2} = (\mathrm{false} \Rightarrow \mathrm{true})$ so that $\mathcal{V}$-categories are preorders. We won’t see this example until next time though. Suppose that $\mathcal{A}$ and $\mathcal{B}$ are $\mathcal{V}$-categories. Associated to $\mathcal{A}$ (and similarly to $\mathcal{B}$) are the $\mathcal{V}$-categories of presheaves $\widehat{\mathcal{A}}$ and opcopresheaves $\check{\mathcal{A}}$. Both of these should be thought of as being like scalar-valued functions on $\mathcal{A}$, that is to say, analogues of $\mathbb{R}^X$.
The category of presheaves is the functor $\mathcal{V}$-category $\widehat{\mathcal{A}} := \mathcal{V}^{\mathcal{A}^{\mathrm{op}}}$, so an object of $\widehat{\mathcal{A}}$ is a functor $F \colon \mathcal{A}^{\mathrm{op}} \to \mathcal{V}$. Associated to a pair of presheaves $F, G \in \widehat{\mathcal{A}}$ is not a number, but rather the hom-$\mathcal{V}$-object: $\widehat{\mathcal{A}}(F, G) = \int_{a \in \mathcal{A}} [F(a), G(a)].$ Here the square brackets mean the internal hom in $\mathcal{V}$. You should compare this formula to the definition of the inner product on $\mathbb{R}^X$: we had $\langle v, w \rangle = \sum_{x \in X} v(x)\, w(x)$.
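To get a feel for the hom-object formula, here is a toy computation in the simplest enrichment, $\mathcal{V} = \mathbf{2}$, with $\mathcal{A}$ taken to be a discrete category, i.e. a plain set; the internal hom is then implication and the end is a conjunction, so the hom-object just detects containment of subsets. The encoding below is mine, not standard:

```python
# V = 2: a presheaf on a discrete category A (here just a set) is a
# boolean function; the internal hom [p, q] is implication; the end
# over A is a conjunction over all elements of A.
A = ["a1", "a2", "a3"]
F = {"a1": True, "a2": False, "a3": True}   # a presheaf F
G = {"a1": True, "a2": True, "a3": True}    # a presheaf G

def hom(p, q):
    # internal hom in 2: p implies q
    return (not p) or q

hom_FG = all(hom(F[a], G[a]) for a in A)    # the end: AND_a [F(a), G(a)]
hom_GF = all(hom(G[a], F[a]) for a in A)
# hom_FG is True exactly when F is contained in G as a subset of A
```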
Analogous to $\mathbb{R}^X$ being the free vector space on $X$, we have that the category of presheaves $\widehat{\mathcal{A}}$ is the free cocomplete category on $\mathcal{A}$, which means that if $F \colon \mathcal{A} \to \mathcal{C}$ is a functor to a cocomplete category then there is a unique (up to isomorphism) cocontinuous functor $\widehat{\mathcal{A}} \to \mathcal{C}$ extending $F$.
Similarly, the category of opcopresheaves is the $\mathcal{V}$-category $\check{\mathcal{A}} := (\mathcal{V}^{\mathcal{A}})^{\mathrm{op}}$; this is the free complete category on $\mathcal{A}$.
Now suppose that $\mathcal{M}$ is a profunctor from $\mathcal{A}$ to $\mathcal{B}$, this means, using Pavlovic’s convention, that it is a functor $\mathcal{M} \colon \mathcal{A}^{\mathrm{op}} \otimes \mathcal{B} \to \mathcal{V}$. This is clearly the analogue of a matrix $M \colon X \times Y \to \mathbb{R}$. As we did with ordinary linear algebra, we will produce a pair of adjoint functors $\mathcal{M}_* \colon \check{\mathcal{B}} \to \widehat{\mathcal{A}}$ and $\mathcal{M}^* \colon \widehat{\mathcal{A}} \to \check{\mathcal{B}}$.
To produce these functors we proceed as we did above. For $\mathcal{M}_*$, we take the adjoint of the profunctor to get a functor $\mathcal{B} \to \widehat{\mathcal{A}}$ given by $b \mapsto \mathcal{M}(-, b)$. As the presheaf category $\widehat{\mathcal{A}}$ is complete, we can take the unique continuous extension to the free completion $\check{\mathcal{B}}$ of $\mathcal{B}$, to get the continuous functor $\mathcal{M}_* \colon \check{\mathcal{B}} \to \widehat{\mathcal{A}}$. You can show that this has the following form: $\mathcal{M}_*(v)(a) = \int_{b \in \mathcal{B}} [v(b), \mathcal{M}(a, b)].$ Again this needs comparing with the comparable formula in linear algebra: $(M_* v)(x) = \sum_{y \in Y} M(x, y)\, v(y)$.
Similarly for $\mathcal{M}^*$, we take the adjoint of the profunctor to get a functor $\mathcal{A}^{\mathrm{op}} \to \mathcal{V}^{\mathcal{B}}$, which is the same as a functor $\mathcal{A} \to \check{\mathcal{B}}$ given by $a \mapsto \mathcal{M}(a, -)$. As the opcopresheaf category $\check{\mathcal{B}}$ is cocomplete, we can take the unique cocontinuous extension to the presheaf category on $\mathcal{A}$ (the free cocompletion of $\mathcal{A}$), to get the cocontinuous functor $\mathcal{M}^* \colon \widehat{\mathcal{A}} \to \check{\mathcal{B}}$. You can show that this has the following form: $\mathcal{M}^*(w)(b) = \int_{a \in \mathcal{A}} [w(a), \mathcal{M}(a, b)].$ Again this needs comparing with the comparable formula in linear algebra: $(M^* w)(y) = \sum_{x \in X} M(x, y)\, w(x)$.
Now we have a cocontinuous functor $\mathcal{M}^* \colon \widehat{\mathcal{A}} \to \check{\mathcal{B}}$ and a continuous functor $\mathcal{M}_* \colon \check{\mathcal{B}} \to \widehat{\mathcal{A}}$. You will not be surprised to learn that the former is left adjoint to the latter: $\check{\mathcal{B}}(\mathcal{M}^*(w), v) \cong \widehat{\mathcal{A}}(w, \mathcal{M}_*(v)).$ It is not difficult to show that both sides are isomorphic to $\int_{a \in \mathcal{A}} \int_{b \in \mathcal{B}} [w(a) \otimes v(b), \mathcal{M}(a, b)]$. Again the comparison with the linear algebra case is telling: $\langle M^* w, v \rangle_{\mathbb{R}^Y} = \langle w, M_* v \rangle_{\mathbb{R}^X}$, with both sides equal to $\sum_{x, y} M(x, y)\, w(x)\, v(y)$.
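To see the two functors and their adjunction in a computable case, take $\mathcal{V} = \mathbf{2}$ with $\mathcal{A}$ and $\mathcal{B}$ discrete, so a profunctor is just a relation between two sets and the two functors become the Galois connection it induces between subsets. A sketch in my own toy encoding:

```python
# V = 2 with discrete A and B: a profunctor is a relation M in X x Y,
# presheaves and opcopresheaves are subsets, the end becomes a
# conjunction, and the adjunction is the classical Galois connection.
from itertools import combinations

X = frozenset({1, 2, 3})
Y = frozenset({"a", "b"})
M = {(1, "a"), (1, "b"), (2, "a")}          # the relation / profunctor

def M_upper(w):
    # M^*(w)(b) = AND_{x in w} M(x, b)
    return frozenset(b for b in Y if all((x, b) in M for x in w))

def M_lower(v):
    # M_*(v)(x) = AND_{y in v} M(x, y)
    return frozenset(x for x in X if all((x, y) in M for y in v))

def subsets(S):
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(sorted(S, key=str), r)]

# The adjunction: v <= M^*(w)  iff  w <= M_*(v), for all subsets v, w.
adjoint = all((v <= M_upper(w)) == (w <= M_lower(v))
              for w in subsets(X) for v in subsets(Y))
```

Both sides of the biconditional say exactly that every element of $w$ is related to every element of $v$, which is the toy shadow of both sides being the same end.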
The nucleus of a profunctor
The nucleus of the profunctor is going to be the centre, or invariant part, of the above adjunction, so I will explain what the centre is in general. I don’t know if this is standard terminology as I learnt of it in discussion with Tom Leinster and haven’t seen it in the literature.
Suppose $F \colon \mathcal{C} \to \mathcal{D}$ and $G \colon \mathcal{D} \to \mathcal{C}$ form an adjunction $F \dashv G$. Then the following are equivalent categories (I’ll just state the objects, for simplicity): (i) the full subcategory of $\mathcal{C}$ on the objects $c$ for which the unit $c \to G F(c)$ is an isomorphism; (ii) the full subcategory of $\mathcal{D}$ on the objects $d$ for which the counit $F G(d) \to d$ is an isomorphism; (iii) the category of pairs $(c, d)$ equipped with isomorphisms $F(c) \cong d$ and $c \cong G(d)$ corresponding to one another under the adjunction. We can think of any one of these as being the centre of the adjunction.
The nucleus of a profunctor $\mathcal{M}$ is then defined to be the centre of the adjunction $\mathcal{M}^* \dashv \mathcal{M}_*$.
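In the toy setting of $\mathcal{V} = \mathbf{2}$ with discrete categories, where the profunctor is a relation and the adjunction is its Galois connection, the centre can be enumerated directly: it consists of the pairs of subsets fixed by the round trips. (My encoding, with an arbitrary example relation; in the language of the promised next post these fixed pairs are the formal concepts of the relation.)

```python
# Enumerate the nucleus of a relation M in X x Y for V = 2:
# pairs (w, v) with M^*(w) = v and M_*(v) = w.
from itertools import combinations

X = frozenset({1, 2, 3})
Y = frozenset({"a", "b"})
M = {(1, "a"), (1, "b"), (2, "a")}

def M_upper(w):
    return frozenset(b for b in Y if all((x, b) in M for x in w))

def M_lower(v):
    return frozenset(x for x in X if all((x, y) in M for y in v))

def subsets(S):
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(sorted(S, key=str), r)]

nucleus = sorted(
    (sorted(w), sorted(M_upper(w)))
    for w in subsets(X) if M_lower(M_upper(w)) == w
)
# For this relation the fixed pairs are
# ([1], ['a', 'b']), ([1, 2], ['a']), ([1, 2, 3], []).
```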
In the linear algebra world, the analogue of the nucleus is $\mathrm{Fix}(M_* M^*)$ – or in more standard notation $\mathrm{Fix}(M M^{\mathrm{T}})$ – the fixed point set of the composite of a linear map with its adjoint. It looks like it could be a standard object of study. Can anyone point me to where this is an interesting object?
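Here is a small numerical illustration of that fixed point set (my construction, assuming numpy): a matrix built to have singular values $1$ and $\tfrac{1}{2}$, so that the fixed points of $M M^{\mathrm{T}}$ are spanned by the singular vector belonging to singular value $1$.

```python
# Build M with singular values 1 and 0.5 via a chosen SVD, then check
# which vectors are fixed by M M^T.
import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal
M = U @ np.diag([1.0, 0.5])                       # SVD with V = I

MMt = M @ M.T                                     # = U diag(1, 0.25) U^T
u1 = U[:, 0]                     # left singular vector, singular value 1
u2 = U[:, 1]                     # left singular vector, singular value 0.5

assert np.allclose(MMt @ u1, u1)        # u1 is a fixed point of M M^T
assert not np.allclose(MMt @ u2, u2)    # u2 is scaled by 0.25, not fixed
```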
In perhaps the simplest case, when we consider the identity, or hom, profunctor, $\mathcal{A}(-,-) \colon \mathcal{A}^{\mathrm{op}} \otimes \mathcal{A} \to \mathcal{V}$, the nucleus is the Isbell completion of the $\mathcal{V}$-category $\mathcal{A}$ which I have mentioned here at the Café.
I believe that the nucleus idea does crop up in various parts of mathematics, but I’m not really sure of how pervasive it is. Next time I will endeavour to explain what it has to do with formal (and fuzzy) concept analysis.
Re: The Nucleus of a Profunctor: Some Categorified Linear Algebra
Interesting — I’m looking forward to hearing more.
What this makes me think of is the singular values of a matrix. Let me assume we’re working over either $\mathbb{R}$ or $\mathbb{C}$, and given a matrix $M$, let me write $M^*$ for either the transpose or the conjugate transpose, respectively.
The theory of eigenvalues and eigenvectors is at its most satisfying for matrices that are self-adjoint (that is, symmetric or Hermitian): e.g. then we have an orthogonal basis of eigenvectors. So, given a matrix that isn’t self-adjoint, we might seek to turn it into a matrix that is, then consider its eigenvalues and eigenvectors. This is done as follows.
Let $M$ be any matrix. Then $M^* M$ is positive semidefinite, and a general result tells us that it therefore has a unique positive semidefinite square root, $\sqrt{M^* M}$. This new matrix is self-adjoint.
So, we’ve succeeded in turning an arbitrary matrix $M$ into a self-adjoint matrix, $\sqrt{M^* M}$, in a canonical way. It bears a close relationship to $M$, at least if $M$ is a square matrix: for then $M = S \sqrt{M^* M}$ for some isometry $S$ (i.e. orthogonal or unitary matrix $S$).
(One place to find this stuff is Chapter 7 of Axler’s Linear Algebra Done Right. The last result I mentioned is the polar decomposition theorem, 7.41.)
The singular values of $M$ are the eigenvalues of $\sqrt{M^* M}$. They are always nonnegative, since $\sqrt{M^* M}$ is positive semidefinite.
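This can be checked numerically (assuming numpy; here the square root of $M^* M$ is computed from an eigendecomposition, which is one standard way to do it):

```python
# Eigenvalues of sqrt(M^* M) versus the singular values reported by SVD.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))

# Form sqrt(M^* M) from an eigendecomposition of the PSD matrix M^* M.
evals, P = np.linalg.eigh(M.T @ M)
root = P @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ P.T

assert np.allclose(root, root.T)          # the square root is self-adjoint
assert np.allclose(root @ root, M.T @ M)  # and it squares to M^* M

sing_from_root = np.sort(np.linalg.eigvalsh(root))[::-1]
sing_from_svd = np.linalg.svd(M, compute_uv=False)
assert np.allclose(sing_from_root, sing_from_svd)
```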
Now, what are the fixed points of $M^* M$? I claim they are the same as the fixed points of $\sqrt{M^* M}$. Indeed, since $M^* M = \sqrt{M^* M} \sqrt{M^* M}$, we have $M^* M v = v \iff \bigl(\sqrt{M^* M} + I\bigr)\bigl(\sqrt{M^* M} - I\bigr) v = 0.$
But $\sqrt{M^* M} + I$ is injective (since $-1$ is not an eigenvalue of $\sqrt{M^* M}$), so this is equivalent to $\sqrt{M^* M}\, v = v$, as claimed.
Hence the fixed points of $M^* M$ are the eigenvectors of $\sqrt{M^* M}$ with eigenvalue 1. I don’t know whether there’s a standard name for the eigenvectors of $\sqrt{M^* M}$. Since the eigenvalues of $\sqrt{M^* M}$ are called the singular values of $M$, I want to say that the eigenvectors of $\sqrt{M^* M}$ are called the singular vectors of $M$. Anyway, in that language, the fixed points of $M^* M$ are the singular vectors of $M$ with singular value $1$.
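A quick numerical check of that last claim (my construction, assuming numpy): build a matrix with singular values $1$ and $\tfrac{1}{2}$ and a known right singular basis, and verify that exactly the singular vector of singular value $1$ is fixed by $M^* M$.

```python
# Fixed points of M^* M are the singular vectors with singular value 1.
import numpy as np

theta = 0.7
V = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # orthogonal
M = np.diag([1.0, 0.5]) @ V.T                      # SVD with U = I

v1 = V[:, 0]                   # right singular vector, singular value 1
v2 = V[:, 1]                   # right singular vector, singular value 0.5

assert np.allclose(M.T @ (M @ v1), v1)        # fixed by M^* M
assert not np.allclose(M.T @ (M @ v2), v2)    # scaled by 0.25 instead
```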