

November 19, 2012


Posted by Urs Schreiber

guest post by Jamie Vicary

There are lots of computations in higher linear algebra that can be difficult to carry out; not because any of the individual steps are difficult, but because the calculation as a whole is long and introduces many opportunities to make mistakes. A student of mine, Dan Roberts, wrote an impressive computer package called TwoVect as an add-on to Mathematica to help with these calculations, and I’d like to tell you about it!

It’s already being used in earnest by Chris Douglas to find some new modular tensor categories, and by me and Dan for some projects in topological quantum field theory and quantum information. So we’re hopeful that the early bugs have been worked out, and we’re happy to show it to the world.

If you like semisimple monoidal categories, and ever spend time checking pentagon or hexagon equations, or computing the value of a string diagram, or verifying the axioms for a Frobenius algebra or a Hopf algebra, or checking properties like modularity or rigidity, or showing that your extended topological quantum field theory satisfies the right axioms, or checking the equations for a bimodule between two tensor categories — or you like the idea of using Mathematica’s built-in solvers to help with these tasks — or you just want to know what this stuff is all about! — read on.

Higher linear algebra

What is higher linear algebra, anyway? It’s the algebra of 2-vector spaces, linear functors and natural transformations. This is a beautiful and important extension of ordinary linear algebra. In my opinion, it’s category theory at its best: it is shocking, and wonderful, to learn that something as straightforward and familiar as linear algebra, which you thought had no secrets left to hide, is in fact just a tiny corner of a vast new mathematical landscape. These 2-vector spaces are used in all sorts of areas, including topological quantum field theory, quantum algebra, representation theory, and even quantum information. And of course, they are also studied for their own sake.

The 2-vector spaces we’re interested in are $\mathbf{Vect}_\mathbb{C}$-enriched categories. We assume that they’re semisimple, meaning that there is a zero object, and a family of nonisomorphic simple objects $\{A, B, C, \ldots\}$ whose endomorphism spaces are $\mathbb{C}$, and which produce every other object by taking finite biproducts. We also assume finite-dimensionality, meaning we can make do with a finite number of these simple objects. With a few notable exceptions, almost everyone who works with 2-vector spaces also makes these assumptions, so they’re not overly restrictive.

Matrix calculations

To do a calculation in ordinary finite-dimensional linear algebra, it can be useful to write your linear maps as matrices of complex numbers. Composites and tensor products can then be directly calculated, and existing computer algebra packages such as Mathematica provide all the tools necessary to do this automatically.
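For ordinary linear algebra these operations are one-liners in any matrix library. As a quick illustration (a Python sketch using numpy, standing in for Mathematica’s built-in matrix operations):

```python
import numpy as np

# two linear maps written as matrices
f = np.array([[1, 2],
              [3, 4]])
g = np.array([[0, 1],
              [1, 0]])

composite = f @ g        # composition of linear maps: matrix product
tensor = np.kron(f, g)   # tensor product of maps: Kronecker product
```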

When it comes to higher linear algebra, we can also use a matrix calculus, but it’s more complicated: our natural transformations are matrices of matrices of complex numbers. Here’s an example:

$$\alpha = \begin{pmatrix} \begin{pmatrix} 1 \end{pmatrix} & 0_{0,0} & 0_{0,0} & \begin{pmatrix} 1 \end{pmatrix} & 0_{0,0} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} \\ 0_{0,0} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} \phi^{-1} & \phi^{-\frac{1}{2}} \\ \phi^{-\frac{1}{2}} & -\phi^{-1} \end{pmatrix} \end{pmatrix} \qquad (1)$$

The entries ‘$0_{0,0}$’ represent the zero map from the zero-dimensional vector space to itself, and $\phi$ is a complex parameter. In fact, this isn’t any old example: it’s the associator for the modular tensor category Fib of ‘Fibonacci anyons’, for which $\phi = \frac{1}{2}(1 + \sqrt{5})$ is the golden ratio. As such, it should satisfy the pentagon equation for a monoidal category.
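As a quick sanity check, sketched in Python with numpy rather than in the package’s Mathematica (the variable names are mine): the nontrivial $2 \times 2$ block of $\alpha$ squares to the identity precisely because the golden ratio satisfies $\phi^{-1} + \phi^{-2} = 1$. This is far from the full pentagon equation, but it is the kind of arithmetic the pentagon check ultimately boils down to:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2          # golden ratio: phi**2 == phi + 1

# the nontrivial 2x2 block of the Fibonacci associator
F = np.array([[phi**-1,   phi**-0.5],
              [phi**-0.5, -phi**-1]])

# F is real, symmetric, and an involution: F @ F is the identity,
# since phi**-2 + phi**-1 == 1
assert np.allclose(F @ F, np.eye(2))
```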

But how to check this? There are three different ways to combine these matrices of matrices: tensor product, vertical composition and horizontal composition. We need to use all of them together to check that $\alpha$ above satisfies the pentagon equation. While this can definitely be done by hand, it’s a delicate calculation to say the least.

But this is just the start of it. What if you had a bigger associator? What if you wanted to check something more complicated? Things get unmanageable very, very fast.

The package

Dan’s package implements a completely skeletal version of $2\mathbf{Vect}_\mathbb{C}$, the symmetric monoidal 2-category of finite-dimensional complex 2-vector spaces. This is exactly what you want for a computer implementation, as it gives you the best possible control over the entities which the system has to represent:

  • 0-cells are natural numbers;
  • 1-cells are matrices of natural numbers;
  • 2-cells are matrices of matrices of complex numbers.

These sorts of things are easy to represent in Mathematica. The package implements the three ways in which these entities can be composed, which are all binary operations:

  • Tensor product $\odot$, for which the arguments can be 0-cells, 1-cells or 2-cells;
  • Horizontal composition $\circ$, for which the arguments can be 1-cells or 2-cells;
  • Vertical composition $\cdot$, for which the arguments must be 2-cells.
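To give a flavour of how this might look in code, here is a minimal Python sketch of vertical composition alone, under my own conventions (TwoVect itself is written in Mathematica, and its actual representation and function names may differ): a 2-cell between parallel 1-cells is stored as a nested list of matrices, and vertical composition multiplies the blocks entrywise.

```python
import numpy as np

def vertical_compose(alpha, beta):
    """Vertical composite of 2-cells beta: f => g and alpha: g => h:
    multiply the matrices of matrices entry by entry."""
    return [[a @ b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(alpha, beta)]

# two 2-cells between the same pair of 1-cells,
# each entry an ordinary complex matrix
beta = [[np.eye(2),         np.array([[1.0]])],
        [np.array([[0.0]]), np.eye(1)]]
alpha = [[np.array([[0.0, 1.0], [1.0, 0.0]]), np.array([[2.0]])],
         [np.array([[3.0]]),                  np.eye(1)]]

gamma = vertical_compose(alpha, beta)   # entrywise matrix products
```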

These are pretty combinatorially tricky, so it’s nice for the package to be able to do these calculations. But the real power of the package lies in its ability to calculate the nontrivial structural 2-cells that account for weakness of horizontal composition and tensor product:

  • The interchanger: $\tau_{f,g,h,i} : (f \circ g) \odot (h \circ i) \to (f \odot h) \circ (g \odot i)$;
  • The compositor: $\omega_{f,g,h} : f \circ (g \circ h) \to (f \circ g) \circ h$;
  • The symmetrizer: $\sigma_{f,g} : \mathrm{swap} \circ (f \odot g) \to (g \odot f) \circ \mathrm{swap}$.

Using these structures, an equation such as the pentagon equation for a monoidal category can be coded once and for all. Then if you come up with an associator for a new monoidal category, all you have to do is type it in and let the package check whether the pentagon equation is satisfied. Or maybe you know some of the entries of the associator, but not others. In that case you can leave them as variables, tell the package to generate the pentagon equation, and get Mathematica to try to find solutions to the resulting equations.
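That last workflow can be mimicked in miniature with any symbolic solver. Here is a hypothetical sketch using Python’s sympy in place of Mathematica’s Solve, leaving one entry of the Fibonacci block as an unknown $x$ and imposing the (much weaker) constraint that the block square to the identity:

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # unknown associator entry
phi = (1 + sp.sqrt(5)) / 2           # golden ratio

# Fibonacci-style block with one entry left as a variable
F = sp.Matrix([[1 / phi, x],
               [x, -1 / phi]])

# impose F**2 == identity and solve for the unknown entry;
# the positive solution is phi**(-1/2), as in the associator above
equation = (F**2 - sp.eye(2))[0, 0]  # the one nontrivial entry
solutions = sp.solve(equation, x)
```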

The future

There’s still loads more we’d like this package to be able to do! There’s plenty of scope to add more higher-algebraic equations, so that they’re hard-coded and ready if people want to make use of them. The core algorithms at the heart of the package have hardly been optimized at all, so there’s significant scope to improve their speed. It would also be great to leverage Nick Gurski’s and Angelica Osorno’s new coherence result to have the structural 2-cells $\tau$, $\omega$ and $\sigma$ inserted automatically where appropriate. And this is just the beginning.

If you want to help out, or you want some help using or understanding the package, get in touch! There is also a user guide available on the webpage, which is over at the $n$Lab.

Posted at November 19, 2012 8:48 PM UTC

16 Comments & 0 Trackbacks

Re: TwoVect

Thanks Dan and Jamie for all this work and this contribution.

Posted by: Bruce Bartlett on November 20, 2012 8:31 AM | Permalink | Reply to this

Re: TwoVect

Under the category, “Mathematical software capable of checking the pentagon equation in a monoidal category” I am now aware of the following three packages. Perhaps someone could remind me of the third one, and any others?

1. The package from the Fusion Atlas.

2. TwoVect. (Of course, TwoVect can do lots of other stuff too)

3. A third package I saw on someone’s webpage but I can’t find it right now :-(

Posted by: Bruce Bartlett on November 20, 2012 8:36 AM | Permalink | Reply to this

Re: TwoVect

Thanks for pointing out the Fusion Atlas. I wonder if that is being actively developed or used; it says at the bottom of the wiki page that it was last modified in 2008. Their stuff is all built in Mathematica as well, so there’s good potential for cross-pollination.

Posted by: Jamie Vicary on November 20, 2012 6:33 PM | Permalink | Reply to this

Re: TwoVect

Wonderful! Has anyone thought about generalizing the package to n-Vect linear algebra? Are there notes about n-Vect linear algebra somewhere on the web? I want to learn more about higher linear algebra and higher multilinear algebra! Please, I want more data/information!

Posted by: Amarashiki on November 20, 2012 12:09 PM | Permalink | Reply to this

Re: TwoVect

It would be great to extend to $n$-linear algebra. Ideally we would have a single package that could handle any finite value of $n$. I wonder how that could work.

Go here for some good introductory information (and plenty of details too) about 2-vector spaces.

Posted by: Jamie Vicary on November 20, 2012 6:38 PM | Permalink | Reply to this

Re: TwoVect

This is pretty awesome!

Posted by: Mike Shulman on November 20, 2012 9:53 PM | Permalink | Reply to this

Re: TwoVect

Thanks Mike!

This could be described as a computational framework that assists with higher-categorical reasoning — as could Coq. Do you see any connections?

Posted by: Jamie Vicary on November 20, 2012 11:26 PM | Permalink | Reply to this

Re: TwoVect

Offhand, no more than the sentence you just wrote. (-: Coq is a proof assistant: it helps you write fully formalized proofs and verify that they are correct. That’s different from performing calculations. Of course there is some overlap: Coq’s proof automation can be used for a certain amount of computation, and it sounds like TwoVect can ‘check’ the truth of equations or something? (I know next to nothing about Mathematica.) One day, maybe there will be a single tool that can both do the calculations and verify formally that they are correct.

Posted by: Mike Shulman on November 21, 2012 2:38 AM | Permalink | Reply to this

Re: TwoVect

It seems to me that Coq works at the syntactic level, whereas TwoVect represents a particular linear semantics that is sometimes useful. So we shouldn’t ask whether they can do the same tasks; we should see whether there’s a meaningful way to map the syntax to the semantics.

Posted by: Jamie Vicary on November 21, 2012 9:52 AM | Permalink | Reply to this

Re: TwoVect

What do you mean by “syntax”? Being a formal system, Coq manipulates syntactical objects, but the meaning that we have in mind for that syntax is mathematics. And TwoVect is also a formal system of some sort, right? So it must be manipulating some syntactic objects that we also think of as meaning some kind of mathematics. Can you explain what you are thinking of as the difference?

Posted by: Mike Shulman on November 21, 2012 4:24 PM | Permalink | Reply to this

Re: TwoVect

TwoVect is an implementation of a particular symmetric monoidal 2-category. As far as my understanding goes, Coq is quite different: it is a formal system for working with expressions written in a particularly rich type theory (the ‘syntax’).

By ‘semantics’, I mean a model for the appropriate $n$-truncated part of the type theory, in the sense in which Hofmann and Streicher famously showed that Gpd is a good model for intensional type theory.

Posted by: Jamie Vicary on November 21, 2012 6:35 PM | Permalink | Reply to this

Re: TwoVect

Coq is more general than TwoVect, to be sure, in the same way that set theory is more general than the theory of a group. But both set theory and the theory of a group are (or can be expressed as) first-order theories, and it seems to me that similarly both Coq and TwoVect are manipulating formal systems which we think of as modeling some “semantic” object. In the case of TwoVect, that semantic object is a particular symmetric monoidal 2-category; in the case of Coq it is (in my case) some $(\infty,1)$-topos.

Posted by: Mike Shulman on November 27, 2012 5:54 AM | Permalink | Reply to this

Re: TwoVect

Is there anything to learn from your philosophy

…this way of thinking – freeing ourselves from our perceived need to distinguish between what I called the “two activities” of mathematics, namely “constructing” and “proving” – is the way of the future,

about how to integrate computer algebra and type-theoretic theorem provers? Isn’t a calculation often the construction of an identity between two terms? What difference is there between calculating the fundamental group of the circle and proving that it is $\mathbb{Z}$?

There must be plenty of work talking about the interface, e.g., View of computer algebra data from Coq.

Posted by: David Corfield on November 21, 2012 12:31 PM | Permalink | Reply to this

Re: TwoVect

Well, that philosophy suggests that eventually, there will be no difference. You’ll have one “computer mathematics system” which can do the same sort of calculations as Mathematica, while simultaneously automatically producing proofs of their correctness, and can also interact with you when performing a more complicated proof the way that Coq does. If you need to perform a calculation in the middle of a more “manual” proof, you’ll just issue the appropriate command and the computer will do the calculation for you, inserting the automatically generated proof of correctness into the proof object you’re building. (Coq can already do this to a limited extent, using its “tactics” for manipulating algebra.)

Isn’t a calculation often the construction of an identity between two terms? What difference is there between calculating the fundamental group of the circle and proving that it is $\mathbb{Z}$?

Well, mathematicians often use the word “calculating” more generally than computer scientists might, to refer to a process which is not algorithmic. In general, I agree that we “calculate” some complicated object by constructing (i.e. proving) an equality between it and something simpler or more familiar, but in general there may be no algorithm which produces that simpler object or the proof of equality. Certainly the “calculation” of $\pi_1(S^1)$ is not just an algorithm that you can “run”. So I think that there is a difference between “calculations” which are just algorithms that the computer can always do for you and automatically produce an equality proof, versus “calculations” that are also constructions of equality proofs, but require the insight of a human mathematician.

Posted by: Mike Shulman on November 27, 2012 6:02 AM | Permalink | Reply to this

Re: TwoVect

So if the use of “calculation” spreads beyond the algorithmic to the insight-requiring, can’t we see use of “proof” potentially spreading in reverse from the insight-requiring to the algorithmic?

I looked at the automated theorem proving community’s work back in around 1999-2001. It’s true the achievements weren’t terribly impressive. The best hope seemed to be in areas like the verification of computer chip specification. Presumably there is some possibility in a well-delimited task for automation. I wonder what the latest on mathematical induction is. Can I get a machine to answer, say, questions 1 and 3 of these? Judging by these slides, it’s possible to subject a problem to an algorithmic battery of heuristics.

Anyway, I guess the issue is that perhaps the algorithmic/non-algorithmic distinction is more important than the proof/calculation distinction.

Posted by: David Corfield on November 27, 2012 9:25 AM | Permalink | Reply to this

Re: TwoVect

can’t we see use of “proof” potentially spreading in reverse from the insight-requiring to the algorithmic?

Of course! A lot of the everyday “users” of Coq do a lot of proof automation with it. They tend to be people verifying the correctness of software, in which case the proofs to be produced are frequently very “wide” (many many special cases to check) but “shallow” (each case is easily dispatched). For this it is much better to have a computer do the proving, since a human will get bored and make mistakes. (This is in contrast to many proofs in pure mathematics, which tend to be “narrow” and “deep”. Perhaps these words are a different way of describing “algorithmic” and “insight-requiring”.)

I found it very interesting, when in a community of such people, to hear them speak of proof engineering. In the wide and shallow case, at least, I think proofs are definitely already something “algorithmic”, which you can mass-produce, and for which you must think about engineering questions rather than conceptual ones.

Posted by: Mike Shulman on November 27, 2012 1:36 PM | Permalink | Reply to this
