
October 1, 2017

Vladimir Voevodsky, June 4, 1966 - September 30, 2017

Posted by John Baez

Vladimir Voevodsky died this Saturday. He was 51.

I met him a couple of times, and have a few stories, but I think I’ll just quote the Institute for Advanced Study obituary and open up the thread for memories and conversation.

The Institute for Advanced Study is deeply saddened by the passing of Vladimir Voevodsky, Professor in the School of Mathematics.

Voevodsky, a truly extraordinary and original mathematician, made many contributions to the field of mathematics, earning him numerous honors and awards, including the Fields Medal.

Celebrated for tackling the most difficult problems in abstract algebraic geometry, Voevodsky focused on the homotopy theory of schemes, algebraic K-theory, and interrelations between algebraic geometry and algebraic topology. He made one of the most outstanding advances in algebraic geometry in the past few decades by developing new cohomology theories for algebraic varieties. Among the consequences of his work are the solutions of the Milnor and Bloch-Kato Conjectures.

More recently he became interested in type-theoretic formalizations of mathematics and automated proof verification. He was working on new foundations of mathematics based on homotopy-theoretic semantics of Martin-Löf type theories. His new “Univalence Axiom” has had a dramatic impact in both mathematics and computer science.

A gathering to celebrate Voevodsky’s life and legacy is being planned and more information will be available soon.

Posted at October 1, 2017 10:43 PM UTC


30 Comments & 0 Trackbacks

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

I felt Voevodsky to be one of the most profoundly individual and creative mathematicians of our time. My impression is that he was not at all interested in following trends or in filling in the details of a sketched picture; he was the kind of person who paved the way for others, whose mathematics had a very personal vision to it.

I was at the IHES birthday conference for Grothendieck in January 2009, where Voevodsky gave a really nice talk going through the proof of the Bloch-Kato conjectures, which he began with an announcement of the formal completion of that proof. I was struck by how well he went through everything, including working out on the spot, under some questioning from Serre, details he had forgotten, despite his interests having been very much elsewhere for a number of years at that point.

I was very much interested in his work on motives at that time, and his work was extremely beautiful. He also had a great gift for exposition.

I also remember fondly a blog post of John Baez’s here where he linked to Voevodsky’s first note on the homotopy λ-calculus. The first time I looked at it, it was completely incomprehensible; it was just so far removed from anything I had ever seen. Later, around 2009 I think, I took some further looks, began to understand something, began to program a bit in Coq, and came to appreciate the beauty of what he was doing. I remember trying to get some people interested in it in Oxford at that time; at that point it was so new that none of the later work by people in what is now the HoTT community had appeared in print, and people’s universal reaction was that these ideas were so far out that they could not possibly understand them or think that they could be relevant any time soon. All that changed of course a few years later, when HoTT became rather fashionable. I still think, though, that Voevodsky’s first note on the homotopy λ-calculus outlines a beautiful vision, and that there are insights in it which may not be so easily gleaned from the HoTT literature.

Posted by: Richard Williamson on October 2, 2017 9:26 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

Yes, I remember being very intrigued when I heard Voevodsky was working on a homotopy λ-calculus:

Posted by: John Baez on October 2, 2017 9:43 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

On the Alg Top bulletin board, Martin Tangora gave the following link to a very interesting article by Voevodsky from 2014.

The Origins and Motivations of Univalent Foundations

Posted by: Tim Porter on October 5, 2017 6:47 AM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

What is going to happen with the IAS’s HoTT work? Might they get a new person to lead it?

I see that Kuen-Bang Hou (favonia) just started an NSF-funded postdoc there to work with Voevodsky. Is there other HoTT work ongoing at IAS?

Posted by: RodMcGuire on October 5, 2017 4:17 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

So did Guillaume Brunerie, and Dan Grayson is visiting IAS too. I don’t know what their plans are, but at least they have each other to work with.

Posted by: Mike Shulman on October 5, 2017 6:36 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

What are some of the most important open questions in homotopy type theory? I have my own ideas, but I’d like to hear what people inside the subject have to say! With Voevodsky gone it will be important for people to set themselves some ambitious goals.

Posted by: John Baez on October 5, 2017 11:44 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

In my opinion, one of the most important open problems in HoTT is a robust theory of categorical semantics. This includes a lot of different things and ties into other open problems, such as:

  • The initiality conjecture for syntax, which Vladimir was working on.

  • The initiality conjecture as usually stated asserts a strict initiality of syntax in a category of strict categorical structures. But naturally arising higher categories are not this strict, so we need some kind of coherence theorem.

  • Most approaches to that kind of coherence theorem are “strictification” theorems saying that a higher category can be presented by some kind of strict 1-categorical structure (to which we can apply the strict universal property). But this only works in a set-theoretic metatheory. Ideally, we should be able to use HoTT itself as the metatheory, in which case strictification is unreasonable to expect, since it is not even synthetically true that every ∞-groupoid admits a surjection from a set. So we need some other kind of coherence theorem.

  • In order to even state such a theorem internal to HoTT, we need to be able to talk about higher categories inside HoTT. This involves us in the “problem of infinite objects”: the finite syntax of type theory has trouble representing infinite higher coherence structures such as (∞,1)-categories, A_∞-algebras, E_∞-ring spectra, etc.

  • We also need a precise definition of the kinds of higher categories that appear in the semantics theorem. This leads, for instance, to considering possible definitions of “elementary ∞-topos”.

  • All of this theory should apply not just to plain Martin-Löf type theory, but to the newer homotopy type theories such as cubical type theory, directed type theory, modal/cohesive type theory, etc. There ought to be a sufficiently general notion of “type theory” about which we can prove such theorems once and for all and then not bother about them again, just like (for instance) the fact that any algebraic theory has free models can be (and has been) proven once and doesn’t need to be re-proven by everyone who invents a new algebraic theory (a concrete sketch of this analogy appears below).

Such a general type theory could also be implemented once and for all, so that people inventing new type theories wouldn’t have to design themselves a new proof assistant; they could just set the parameters in a new way in an existing one. Then we could formalize the general semantic theorems in that general proof assistant. I would regard that as a really satisfactory resolution of the “semantic problem”, and a “coming of age” marker for HoTT as a foundation for mathematics, analogous to a new programming language reaching the point of being able to compile its own compiler.
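To make the free-models analogy concrete, here is a minimal sketch in Agda for one particular algebraic theory, monoids. All names here are illustrative, not from any library; the general theorem would do this uniformly in the signature, once and for all.

    -- Terms of the theory of monoids over a set V of variables:
    -- the free model ("term algebra") on V.
    data MonTerm (V : Set) : Set where
      var : V → MonTerm V
      ε   : MonTerm V
      _·_ : MonTerm V → MonTerm V → MonTerm V

    -- A monoid structure on a carrier A (equations omitted in this sketch).
    record MonAlg (A : Set) : Set where
      field
        unit : A
        mul  : A → A → A

    -- Freeness: any valuation of the variables in any model extends to
    -- all terms. This plays the role that initiality of syntax would
    -- play for a general type theory.
    fold : {V A : Set} → MonAlg A → (V → A) → MonTerm V → A
    fold M ρ (var x) = ρ x
    fold M ρ ε       = MonAlg.unit M
    fold M ρ (t · u) = MonAlg.mul M (fold M ρ t) (fold M ρ u)

The hoped-for general notion of “type theory” would play the role that the monoid signature plays here, with the initiality theorem playing the role of fold, proved once for every type theory.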

Posted by: Mike Shulman on October 6, 2017 6:37 AM | Permalink | Reply to this

Some other kind of coherence theorem

I had a thought that might be relevant: Even if a higher-dimensional object generally doesn’t have a set-level presentation, definable objects of any dimensionality certainly have a syntactic presentation, which is therefore set-level.

So in addition to letting HoTT eat itself, the option of using a standard interpreter suggests a general approach to/attitude toward coherence in general models of HoTT: it’s the syntax that’s coherent and presentable. Beyond that, there’s no need for presentations, provided you have an interpreter.

But maybe this was all already clear from earlier discussions about autophagy.

I had a more practical thought too, which I totally have not investigated: Rather than jamming in a standard interpreter as a primitive, it might be possible to construct one using inductive-inductive-recursive families. I know, this just sounds like the one thing that hasn’t been tried. (Or maybe it has, and I’m wrong.) But it kind of intuitively seems like the winning combination. The induction-induction part lets us handle the circularity of dependent types without trying to strictify, and the recursion part lets us turn the object language’s judgmental equalities directly into metalanguage judgmental equalities by making them compute. My excuses for not just trying this:

  • The status of induction-recursion in homotopy models is uncertain, last I heard.
  • I’m not really an Agda user, so it’s not that convenient for me.

Maybe a homotopical interpretation of IR (actually IIR) could be a key component of an alternative to strictification.
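For concreteness, here is the inductive-inductive half of what I mean, in Agda; this is just the classic contexts-and-types example, with illustrative names, not the full proposal. The inductive-recursive half, mutually interpreting these codes into the universe, is exactly the part whose homotopical status is uncertain.

    -- Contexts and the types over them, defined mutually: the
    -- circularity of dependent types is handled by induction-induction
    -- directly, with no strictification step.
    mutual
      data Ctx : Set where
        ∙   : Ctx
        _▹_ : (Γ : Ctx) → Ty Γ → Ctx

      data Ty : Ctx → Set where
        U : ∀ {Γ} → Ty Γ
        Π : ∀ {Γ} (A : Ty Γ) → Ty (Γ ▹ A) → Ty Γ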

If we discuss this, maybe it should be somewhere else than a Vladimir Voevodsky memorial post. But I don’t know.

Posted by: Matt Oliveri on October 8, 2017 10:30 AM | Permalink | Reply to this

Re: Some other kind of coherence theorem

Yes, the syntax is obviously a set-level strict structure. The point is that in the absence of a strictification-style coherence theorem for non-strict structures, we have to prove directly that the syntax is initial among non-strict structures (despite being strict itself). That’s another kind of “coherence theorem”.

Back when I was playing around with autophagy, I tried induction-(induction-)recursion too, but didn’t get anywhere. But I encourage anyone else who wants to try it!

Currently, my bet as to the best approach to this problem is to define only the normal forms as an inductive-inductive family. Intuitively, that means that you don’t need to talk explicitly about judgmental equality at all; it’s all encoded in the normalization process, which happens purely in the world of syntax before any semantics occurs. (That may mean we need to use a cubical homotopy type theory that’s fully normalizing.) In practice, when doing that for a dependent type theory you do need to talk about a limited sort of judgmental equality called “hereditary substitution”, and I believe that in order to do it inductive-inductively with only well-typed terms (which I believe is necessary to give it the right universal property to eliminate into categorical semantics) you actually need an infinite tower of hereditary substitutions and coherences for it. So the problem of infinite objects is still there, but if it can be dealt with (or punted by an appropriate choice of theorem statement) then I have strong hopes that this approach will solve the coherence problems with interpreting directly into non-strict higher categories without the need for a strictification theorem.
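For simple types, the shape of such a definition is standard; here is a minimal Agda sketch with illustrative names. The dependent case, where the types themselves must be normalized and hereditary substitution enters, is precisely where the trouble described above begins.

    -- Normal and neutral forms, defined mutually and indexed by context
    -- and type, so that only β-normal, η-long terms are representable;
    -- judgmental equality never needs to be mentioned.
    data Ty : Set where
      ι   : Ty
      _⇒_ : Ty → Ty → Ty

    data Ctx : Set where
      ∙   : Ctx
      _▹_ : Ctx → Ty → Ctx

    data Var : Ctx → Ty → Set where
      vz : ∀ {Γ A}   → Var (Γ ▹ A) A
      vs : ∀ {Γ A B} → Var Γ A → Var (Γ ▹ B) A

    mutual
      data Nf : Ctx → Ty → Set where
        lam : ∀ {Γ A B} → Nf (Γ ▹ A) B → Nf Γ (A ⇒ B)
        ne  : ∀ {Γ}     → Ne Γ ι → Nf Γ ι   -- η-longness: neutrals only at base type

      data Ne : Ctx → Ty → Set where
        var : ∀ {Γ A}   → Var Γ A → Ne Γ A
        app : ∀ {Γ A B} → Ne Γ (A ⇒ B) → Nf Γ A → Ne Γ B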

Posted by: Mike Shulman on October 9, 2017 6:17 AM | Permalink | Reply to this

Re: Some other kind of coherence theorem

Yes, the syntax is obviously a set-level strict structure. The point is that in the absence of a strictification-style coherence theorem for non-strict structures, we have to prove directly that the syntax is initial among non-strict structures (despite being strict itself). That’s another kind of “coherence theorem”.

If we’re lucky, maybe the dependent elimination form for the IIR families will do that.

Currently, my bet as to the best approach to this problem is to define only the normal forms as an inductive-inductive family. Intuitively, that means that you don’t need to talk explicitly about judgmental equality at all; it’s all encoded in the normalization process, which happens purely in the world of syntax before any semantics occurs.

That’s similar to what I had in mind. I was going to define all forms and a normalizer mutually. You would take care to only require equality of normal forms, but you could otherwise do the usual thing. I have a feeling that this will generalize more easily than hereditary substitution.

(That may mean we need to use a cubical homotopy type theory that’s fully normalizing.)

Well, intensional type theory is strongly normalizing too. But yeah, it’s a restriction. That’s a bit mysterious. Intuitively, I figure the ability to interpret some system requires enough coherence in the metalanguage. Enough judgmental equalities. So on one hand you’d like more judgmental equalities. But for strong normalization, you should not have too many. What a nuisance! I would like it if HoTTheorists would think about what judgmental equalities they really want, for semantic reasons, and then the computer scientists can figure out the best algorithmic approach to implementing it. Maybe we should not rely on strong normalization (of the main terms).

In practice, when doing that for a dependent type theory you do need to talk about a limited sort of judgmental equality called “hereditary substitution”, and I believe that in order to do it inductive-inductively with only well-typed terms (…) you actually need an infinite tower of hereditary substitutions and coherences for it.

I don’t think you’d need that. The whole point is to make things work out judgmentally. And you’d better not need it. My collection of failed interpreters seems to indicate that infinite towers of almost anything don’t work. It makes you need to do an induction, and the induction hypothesis you really want is not expressible (yet).

So the problem of infinite objects is still there, but if it can be dealt with (or punted by an appropriate choice of theorem statement) then I have strong hopes that this approach will solve the coherence problems with interpreting directly into non-strict higher categories without the need for a strictification theorem.

Yay! But now I’m not sure you fully understood what I was saying. I think IIR families are sufficient to solve the infinite object problem. At least to the degree necessary to interpret small dependent type systems, and define semi-simplicial types (and probably a heck of a lot of other things).

Posted by: Matt Oliveri on October 9, 2017 9:08 AM | Permalink | Reply to this

Re: Some other kind of coherence theorem

Well, intensional type theory is strongly normalizing too. But yeah, it’s a restriction. That’s a bit mysterious.

Yes, but plain intensional type theory isn’t univalent. I’m not sure whether that’ll matter; maybe it won’t.

What I find mysterious is not the idea that normalization might be useful for semantics, but the idea that an idea of such centrality to the computational side of type theory as normalization might have no relevance to the semantic side. It always troubled my faith in the unity of mathematics. (-:

I find quite pleasing the idea that judgmental equality happens only in syntax by normalizing and comparing normal forms. It really explains to me what judgmental equality “is”, and it makes sense because in the “true” semantics of a weak (∞,1)-category, there are no judgmental equalities at all: everything is up to homotopy.

I don’t think you’d need that. The whole point is to make things work out judgmentally.

Yes, but the point is that you can’t know that things are going to work out judgmentally while you are defining the inductive-inductive family. As part of the induction-induction, you define the graph of hereditary substitution; then afterwards you can prove that this relation is the graph of a function and that the function is judgmentally associative etc. But you need to know about associativity while defining the inductive-inductive family, i.e. before you’ve given the proof, so you need to define its graph too. Ad infinitum. The problem of infinite objects means that you can’t (obviously) do this internally with finite syntax, but if you externalize it sufficiently (or restrict the h-level) I think it should work.
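Here is a toy analogue of “define the graph first, prove functionality afterwards”, in Agda, with ordinary addition standing in for hereditary substitution (names illustrative):

    data Nat : Set where
      zero : Nat
      suc  : Nat → Nat

    -- The graph of addition as an inductive relation: definable before
    -- one knows, or proves, that it is the graph of a function.
    data Plus : Nat → Nat → Nat → Set where
      pz : ∀ {n}     → Plus zero n n
      ps : ∀ {m n k} → Plus m n k → Plus (suc m) n (suc k)

Afterwards one proves that Plus is total and deterministic. For hereditary substitution, the analogous facts (functionality, associativity, …) would each need their own graphs while the family is still being defined, which is the regress just described.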

I think IIR families are sufficient to solve the infinite object problem.

I don’t. (I wrote a paragraph about my reasons back at my autophagy post.) But I’d love to be proven wrong!

Posted by: Mike Shulman on October 9, 2017 4:14 PM | Permalink | Reply to this

Re: Some other kind of coherence theorem

What I find mysterious is not the idea that normalization might be useful for semantics, but the idea that an idea of such centrality to the computational side of type theory as normalization might have no relevance to the semantic side. It always troubled my faith in the unity of mathematics. (-:

I wrote an inexcusably long draft reply warning you about the position you seem to be taking here. But then, in light of the next paragraph, I realized I don’t know what you’re really saying here. Please explain.

What I see happening is that we have a metalanguage whose judgmental equalities arise only from computation, and a plausible-seeming consequence of this is that an object language only has a standard interpretation if we can show how its judgmental equalities arise from computation.

I called this situation “mysterious”, but a better word would’ve been “ironic”. (What’s mysterious is whether it’s actually strong normalization that’s the limiting factor or something else.)

I find quite pleasing the idea that judgmental equality happens only in syntax by normalizing and comparing normal forms. It really explains to me what judgmental equality “is”…

Well, technically, judgmental equalities are equalities axiomatized with judgments. Then you can consider possible roles for judgmental equality. Normalizing and comparing normal forms is usually called “definitional equality”. That’s a possible role for judgmental equality. Martin-Löf and others apparently think that’s the right role for it.

But in your autophagy post, you propose an alternative role for judgmental equality: to axiomatize a coherent system of paths. Let’s call that role “coherent equality”. I think that’s much more practical. And definitional equality and coherent equality generally don’t seem to be the same. By choosing the class of models, you can force these equalities to coincide. Here’s the important point, which got me writing too much, wanting to hammer it in: I think that would be a terrible mistake.

…and it makes sense because in the “true” semantics of a weak (∞,1)-category, there are no judgmental equalities at all: everything is up to homotopy.

This statement doesn’t make sense. Judgmental equality pertains to the judgments of a formal system. What are you really saying?

Posted by: Matt Oliveri on October 10, 2017 9:17 AM | Permalink | Reply to this

Re: Some other kind of coherence theorem

Yes, but the point is that you can’t know that things are going to work out judgmentally while you are defining the inductive-inductive family.

So I definitely have not gotten my idea across, then. The whole reason I think we need IIR, not just II, is that the recursive clauses work out judgmentally, including while you’re defining it.

As part of the induction-induction, you define the graph of hereditary substitution; then afterwards you can prove that this relation is the graph of a function and that the function is judgmentally associative etc.

No. The proposal is, if you use hereditary substitution: You would implement hereditary substitution mutually, using recursive clauses.

But you need to know about associativity while defining the inductive-inductive family, i.e. before you’ve given the proof, so you need to define its graph too. Ad infinitum.

If it’s true that properties of hereditary substitution would be needed mutually as well, then it probably won’t work. But I figure the reason would be that those properties would need to be proven by induction, which would not yet be available. There are no graphs.

The problem of infinite objects means that you can’t (obviously) do this internally with finite syntax, but if you externalize it sufficiently (or restrict the h-level) I think it should work.

Yeah, there’s always that. I think having a layer that can reason about coherent equality is ultimately the best way anyway. But it would be cool if IIR lets us approximate the power of that.

I think IIR families are sufficient to solve the infinite object problem.

I don’t. (I wrote a paragraph about my reasons back at my autophagy post.) But I’d love to be proven wrong!

And I’d love to prove you wrong! …Just not enough to do it right away, and get seriously sidetracked from what I’m currently doing.

You wrote about Danielsson’s approach and McBride’s approach. I’m pretty convinced McBride’s approach doesn’t work for this, since that’s what I tried. What I’m proposing now is more like Danielsson’s.

I guess you’re saying the definition turns out to be harder than I naively hoped, and it messes up the elimination form. I should look into that. If you have any more thoughts on what goes wrong with Danielsson’s, they might be helpful for me. For example, I don’t see at all why you’d need to know that the IIR types are h-sets during the definition.


After thinking some more, I realized the two ideas are not unrelated: the graph of hereditary substitution is essentially the natural way to encode an inductive-recursive normalization function using induction-induction instead.

Well, I’d say the graph of hereditary substitution is the natural way to inductively encode hereditary substitution. To say it’s the natural way to encode a normalization function seems to say that hereditary substitution is the natural way to do normalization, which I don’t agree with.

If it happens that induction-recursion has some extra automatic strictness that enables you to interpret it into the universe without an infinite tower of anything, then I would become rather more dubious that induction-recursion makes sense homotopically.

Why? It’s letting us express something we already believe to exist. (In intended models.) Probably it’s more powerful than just assuming a standard interpreter, but I figured the extra power you get would be natural. You don’t have to think of IR as having extra strictness, but as having extra coherence. Just like any other time we require things to work out definitionally. IIR would hopefully let us access the “native” coherence of the universe.

Maybe the problem is that you’re already uncomfortable with a standard interpreter that works out definitionally? (That would be very disappointing.)

Also note that we don’t only want to interpret into “the” universe but into any sufficiently structured (∞,1)-category, and the latter may not be as strict as the universe.

Well, that kind of interpretation could not be standard, it seems. So that’s a different problem, which we might solve by using a standard interpreter to define a tower of coherences. (As you pointed out long ago.)

Posted by: Matt Oliveri on October 10, 2017 11:10 AM | Permalink | Reply to this

Re: Some other kind of coherence theorem

I guess you’re saying the definition turns out to be harder than I naively hoped, and it messes up the elimination form. I should look into that.

Ugh. Yeah, I give up.

Posted by: Matt Oliveri on October 11, 2017 8:31 PM | Permalink | Reply to this

Re: Some other kind of coherence theorem

Sorry, I was using “judgmental equality” indiscriminately to refer to all kinds of strict equality. Bad habit, I know. But part of the point I was making is that in my proposal, they would all be the same.

It looks like I also wasn’t clear about when I was talking about my approach versus yours; although I used the generic “you” I was still talking about the inductive-inductive version, with the graph and higher graphs of hereditary substitution.

In any case, I think we’ve taken over too much of this comments thread already with this. And as we discovered last time, arguing about intuitions is of limited usefulness: eventually someone has to go prove something to either convince or be convinced. But if you do want to talk about this more, let’s switch to private email.

Posted by: Mike Shulman on October 10, 2017 8:28 PM | Permalink | Reply to this

Re: Some other kind of coherence theorem

After thinking some more, I realized the two ideas are not unrelated: the graph of hereditary substitution is essentially the natural way to encode an inductive-recursive normalization function using induction-induction instead.

If it happens that induction-recursion has some extra automatic strictness that enables you to interpret it into the universe without an infinite tower of anything, then I would become rather more dubious that induction-recursion makes sense homotopically. Also note that we don’t only want to interpret into “the” universe but into any sufficiently structured (∞,1)-category, and the latter may not be as strict as the universe.

Posted by: Mike Shulman on October 9, 2017 7:29 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

At some level (problem 3? problem 5?) it seems like a version of the homotopy hypothesis appears here.

Posted by: Gershom Bazerman on October 9, 2017 12:34 AM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

Mike already described very eloquently a main family of open problems related to (categorical) semantics and internal (∞,1)-category theory.

There’s a list of open problems.

More generally, we also need to develop new compelling applications of HoTT that would be of interest to a wide audience of mathematicians. That could be new results in homotopy theory or general results about (∞,1)-toposes (such as the generalized Blakers-Massey theorem), or developments in particular (∞,1)-toposes such as for real or smooth cohesion, or perhaps something related to algebraic geometry.

Posted by: Ulrik Buchholtz on October 6, 2017 9:27 AM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

Why do several of these open problems mention semi-simplicial objects rather than simplicial objects?

Posted by: John Baez on October 9, 2017 1:30 AM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

As I understand it, as per the discussion from the special year, it is believed that semi-simplicial objects should be easier to construct through an inductive procedure.

Furthermore, because of the adjunction between semi-simplicial and simplicial structures, as discussed in the nLab page on category objects in ∞-categories, that should suffice for the internal construction of ∞-categories, etc.

So, “it appears to be simpler” and “it should be enough”. And, I suppose, a sense that “if you can’t at least do that, then you have no hope for simplicial objects either, anyway.”

I suspect (but may be wrong) that the other issue mentioned on the IAS page, regarding constructivity, is a bit of a red herring, as there’s no reason (that I know of) to imagine that working with semi-simplicial objects is any less classical as a whole. But it is worth pointing out the more complete paper that extends the linked result on constructivity, as a very nice and readable exposition.

Posted by: Gershom Bazerman on October 9, 2017 6:15 AM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

Because simplicial objects are even harder! A semi-simplicial object can be expressed purely in terms of type dependency, ensuring that the defining semi-simplicial identities hold strictly; the only problem (though it’s a doozy) is dealing with the infinite tower of dependent types involved. In the simplicial case, the degeneracies have to satisfy their own identities, and type dependency doesn’t suffice to ensure that those hold strictly; so you either need something like a 2-level theory to assert such a strict equality, or to assert them as homotopies but then add all the necessary higher coherences.
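To illustrate, here is the 2-truncated part of a semi-simplicial type in Agda (a sketch with illustrative names; the open problem is defining all levels at once, not any finite truncation):

    -- Only type dependency is used: the semi-simplicial identities hold
    -- by construction, and no degeneracies or coherence data are needed.
    record SST₂ : Set₁ where
      field
        A₀ : Set                                             -- points
        A₁ : A₀ → A₀ → Set                                   -- edges
        A₂ : {x y z : A₀} → A₁ x y → A₁ y z → A₁ x z → Set   -- triangles

Each level just adds another dependent field; it is writing down the whole infinite tower in one internal definition that nobody knows how to do.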

Posted by: Mike Shulman on October 9, 2017 6:05 AM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

I agree with Ulrik that there are lots of areas where further development is needed and that are not at all far out of reach. The initiality of syntax is a technical, internal concern to specialists, but applications of HoTT and UniMath to other areas of math don’t require it. For instance, Floris van Doorn (working with others and following a suggestion of Mike’s) has just completed a formalization of the Serre spectral sequence in HoTT-Lean that should open up new applications in cohomology, and gives another example of how much is possible in synthetic HoTT.

Posted by: Steve Awodey on October 21, 2017 4:31 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

Not specifically with regard to HoTT: there remain major challenges in reaching Voevodsky’s original goal of practical software for formal proof. Whilst HoTT has contributed many interesting things to mathematics and type theory, I am not convinced that it has helped very much at all with this practical goal, except in the sociological sense of bringing attention to it.

Even if one’s proof is algorithmic in nature and uses only simple objects, and so is easily describable on a computer, it is still highly impractical to formalise its correctness. This is very frustrating, because it seems quite do-able. What we probably need is a new kind of higher-level syntax (and a compiler for it), of utility equivalent to that of the for loops and if blocks of today’s programming languages. It may well be that we should not look towards type theory for inspiration here; probably some new kind of language is needed to reason about the higher-level syntax.

For some subjects, like geometric topology and knot theory, one would need not only a higher level syntax, but a new kind of compiler which is able to translate geometric statements (‘move this arc of the knot over here’) into some concrete lower-level steps. Ideally, one would also have some kind of 2d or even 3d syntax.

Posted by: Richard Williamson on October 6, 2017 9:46 AM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

Richard said:

For some subjects, like geometric topology and knot theory, one would need not only a higher level syntax, but a new kind of compiler which is able to translate geometric statements (‘move this arc of the knot over here’) into some concrete lower-level steps. Ideally, one would also have some kind of 2d or even 3d syntax.

You mean one would need something like Globular?

Posted by: Simon Willerton on October 6, 2017 3:06 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

You mean one would need something like Globular?

Exactly that kind of thing, yes! I think Globular is a very beautiful development.

Basically, in Globular one can only construct correct proofs, i.e. whatever one can do translates directly into a proof step (i.e. a manipulation in the underlying semistrict higher category). This is exactly how it should be. One of course also needs to verify that this process is bug-free, but if it is, then one has a formal proof. The challenge is to do something like Globular in general. Or it doesn’t actually have to be done completely in general, but if one wishes suddenly to combine two ‘modules’, one is going to need some way for them to understand one another. For instance, my proof might involve some diagrammatic manipulations that Globular can handle, but then there might be some arguments involving a fundamental group as well. How can we merge the ‘group theory’ module with the ‘Globular’ module? Here is maybe where we need some new higher-level language/syntax most.

Posted by: Richard Williamson on October 6, 2017 4:00 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

In principle, it ought to be possible for Globular to “compile” its proofs into proof terms for (any implementation of) HoTT. Of course, that would require actually proving the semistrictification coherence theorem, i.e. that Globular’s semistrict n-categories really are justified.

Posted by: Mike Shulman on October 6, 2017 5:41 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

Mike Stay has pointed out an interesting interview of Voevodsky, in Russian:

Here’s a bit of the first part in English, with a lot of help from Google Translate:

Question: The most famous result of yours is the solution of Milnor’s problem of K-functors of fields. You got it back in ‘96. And what happened next? How did your scientific interests evolve in the following years?

Answer: First, it was necessary to prove the generalization of the Milnor conjecture, now known as the Bloch-Kato conjecture. I formulated the basic idea of this proof at the end of ’96, at about the same time that I wrote the first complete version of the proof of Milnor’s conjecture. The approach I came up with for the proof of Bloch-Kato had several problems. First, it depended on a certain “sub-conjecture”, which was a generalization of a result of Markus Rost. Second, it depended on the development of much more advanced machinery in motivic homotopy theory than what had sufficed to prove Milnor’s conjecture. It was clear that Markus could most likely finish the first, and that the second I would have to do myself. In the end, the first was completed in, I believe, 2007 or 2008 by Suslin, Zhukhovitsky and Weibel, based on Markus’s sketches. And I finished all the preliminary work and the proof itself only in February 2010.

It was very difficult. In fact, it was 10 years of technical work on a topic that did not interest me during the last 5 of these 10 years. Everything was done only through willpower.

By the autumn of 1997, I already understood that my main contribution to the theory of motives and motivic cohomology had been made. From that time on I was very consciously and actively searching for a topic that I would take up once I had fulfilled my obligations related to the Bloch-Kato conjecture. I quickly realized that if I wanted to do something really serious, then I should make the most of my accumulated knowledge and skills in mathematics. On the other hand, seeing the trends in the development of mathematics as a science, I realized that the time was coming when the proof of yet another conjecture would change little. Mathematics was on the verge of a crisis, or rather, two crises. The first is connected with the separation of “pure” and applied mathematics. It is clear that sooner or later the question will arise of why society should pay money to people who are engaged in things that have no practical applications. The second, less obvious, is connected with the growing complexity of pure mathematics, which leads to the fact that, sooner or later, articles will become too complicated for detailed verification and the process of accumulating undetected errors will begin. And since mathematics is a very deep science, in the sense that the results of one article usually depend on the results of many, many previous articles, this accumulation of errors is very dangerous for mathematics.

So I decided that I needed to try to do something that would help prevent these crises. In the first case, this meant finding an applied problem whose solution required the methods of pure mathematics developed in recent years or even decades.

Since childhood I have been interested in the natural sciences (physics, chemistry, biology), as well as in the theory of computer languages, and since 1997 I had read a lot on these topics, and even taken several undergraduate and graduate courses. In effect, I “updated” and deepened, to a very large extent, the knowledge that I already had. All this time I was looking for recognized open problems that would be of interest to me and to which I could apply modern mathematics.

As a result I chose, as I now understand incorrectly, the problem of recovering the history of populations from their modern genetic composition. I worked on this problem for a total of about two years, and in the end, by 2009, I realized that what I was inventing was useless. In my life so far, it was perhaps my greatest scientific failure. A lot of work was invested in a project that completely failed. There was some benefit, of course: I learned a lot of probability theory, which I had known poorly, and also learned a lot about demography and demographic history.

In parallel, I was looking for approaches to the problem of the accumulation of errors in works of pure mathematics. It was clear that the only solution was to create a language in which people could write mathematical proofs in a form that a computer could check. Up until 2005, this task seemed to me much more complicated than the historical genetics I was engaged in. In many respects this feeling was the result of an opinion, established and very widespread among mathematicians, that abstract mathematics cannot reasonably be formalized accurately enough to be “understood” by a computer.

In 2005, I managed to formulate several ideas that unexpectedly opened up a new approach to one of the main problems in the foundations of modern mathematics. This problem can be formally formulated as a question of how to correctly formalize the intuitive understanding that “identical” mathematical objects have the same properties. Arguments based on this principle are very often used in modern mathematical proofs, but the existing foundations of mathematics (Zermelo-Fraenkel set theory) are completely unsuited for the formalization of such arguments.

I was very familiar with this problem, having thought about it back in 1989, when Misha Kapranov and I worked on the theory of n-categories. Then it had seemed to us impossible to solve. What I was able to understand in 2005, combining ideas from homotopy theory (a part of modern topology) and type theory (a part of the modern theory of programming languages), was absolutely amazing, and opened up real possibilities for constructing the very language in which people could write proofs that a computer could check. Then there was a big break in my mathematical activity. From June 2006 to November 2007 I did not do mathematics at all. What happened during this period we will discuss in another part of the interview. Now, thinking about what was happening to me at that time, I often recall the story by A. and B. Strugatsky, A Billion Years Before the End of the World. I returned to mathematics at the end of 2007. I worked at first intermittently, on ideas related to historical genetics, and then on finishing the cycle of work on the proof of the Bloch-Kato conjecture. I returned to the ideas connected with computer verification of proofs only in the summer of 2009, when it finally became clear to me that nothing would come of the historical genetics. And just a few months later two events occurred that advanced these ideas from general hints, which I had thought I would have to work on for more than a year, to the stage at which I was able to state that I had come up with new foundations of mathematics that would solve the problem of computer verification of proofs. Now this is called “univalent foundations of mathematics”, and it is studied by both mathematicians and theorists of programming languages. I have almost no doubt that these foundations will soon replace set theory, and that the problem of a language for abstract mathematics that computers can “understand” can be considered basically solved.

Posted by: John Baez on October 6, 2017 12:03 AM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

I am a Russian speaker and have translated the interview here, with comments enabled.

Posted by: Innokentij Zotov on October 15, 2017 3:05 PM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

Here’s another interesting part of the interview:

Answer: It came to me how little our science really explains when I was about 35 years old, around 2001. At the time I did not connect this with the fact that in the 20th century science excluded from its field of attention what is now called the “supernatural.” I still treated everything mystical-religious as deception or delusion, and I held this position very firmly until 2007. The period from 2001 to 2006 was very difficult. For several years I was saved only by the fact that I took up wildlife photography. Some of my photos from that period can be found here: http://pics.livejournal.com/vividha/

Question: It happens that people become seekers after contact with something that does not fit into their old understanding, into their familiar picture of the world. For example, they say that Gurdjieff as a child witnessed a ritual act in which children drew a circle around a Yezidi boy, and he could not escape from the circle. Struck by what he saw as supernatural, as well as by human cruelty, Gurdjieff began to seek new knowledge about the world and man. Did you have any such moments, events, inexplicable phenomena that triggered a rethinking?

Answer: In 2006-2007 a lot of external and internal events happened to me, after which my point of view on questions of the “supernatural” changed significantly. What happened to me during these years can perhaps be compared most closely to what happened to Carl Jung in 1913-14. Jung called it a “confrontation with the unconscious”. I do not know what to call it, but I can describe it in a few words. While remaining more or less normal, apart from the fact that I was trying to discuss what was happening to me with people I should not have discussed it with, I had within a few months acquired very considerable experience of visions, voices, periods when parts of my body did not obey me, and a lot of incredible coincidences. The most intense period was in mid-April 2007, when I spent 9 days (7 of them in the Mormon capital of Salt Lake City) without once falling asleep.

Almost from the very beginning, I found that I could control many of these phenomena (voices, visions, various sensory hallucinations). So I was not frightened and did not feel ill, but perceived everything as something very interesting, actively trying to interact with the “beings” in the auditory, visual, and then tactile spaces that appeared around me (on their own, or when I invoked them). I should say, probably to forestall possible speculation on this subject, that I did not use any drugs during this period; I tried to eat and sleep a lot, and drank diluted white wine.

Another comment: when I say “beings”, naturally I mean what in modern terminology are called complex hallucinations. The word “beings” emphasizes that these hallucinations themselves “behaved”, possessed a memory independent of my memory, and reacted to attempts at communication. In addition, they were often perceived in concert in various sensory modalities. For example, I played several times with a (hallucinated) ball with a (hallucinated) girl — and I saw this ball, and felt it with my palm when I threw it.

Although all this was very interesting, it was also very difficult. It came in several periods, the longest of which lasted from September 2007 to February 2008 without a break. There were days when I could not read, and days when my coordination was impaired to such an extent that it was difficult to walk.

I managed to get out of this state by forcing myself to start doing mathematics again. By the middle of spring 2008 I could already function more or less normally, and even went to Salt Lake City to look at the places where I had wandered in the spring of 2007, not knowing where I was.

It should be said that despite many conversations with non-material “beings” during this period, I completely failed to understand what was actually happening. I was “offered” many explanations, including hypnotists, aliens, demons, and secret communities of people with magical abilities. None of the explanations accounted for everything I observed. Eventually, since some terminology was needed in conversations, I began to call all these beings spirits, although I now think that this terminology is not right. The terms “world system” (apparently exercising control over people) and, especially in the beginning, “the game hosted by fear” came up in this context.

After I returned to a more or less normal state, and in particular could read serious books again, I began to study very actively those areas of knowledge I had previously ignored. First of all, I tried to find descriptions of similar events that had happened to other people. I must say that I found almost none (not counting Jung). Something slightly similar, but without visions, happened to Karen Armstrong, who later began to write books about different religions. There were many descriptions of people experiencing visions, voices, unusual emotional states, etc. over the course of hours or days (“mystical experience”). As a rule, it either strengthened them in the religion in which they grew up or made them religious. A classic and very interesting example, in which events of this kind continued for a long time, is Swedenborg. In my case, however, it worked differently: Swedenborg quickly accepted what was happening to him as coming from God, and after that the process was completely different. Perhaps the most interesting for me was the story of Carl Jung’s “confrontation with the unconscious”, but there the situation was also different, because Jung, unlike me, had encountered the “supernatural” since childhood and believed in God.

Posted by: John Baez on October 6, 2017 12:19 AM | Permalink | Reply to this

Re: Vladimir Voevodsky, June 4, 1966 - September 30, 2017

As someone with a long-standing amateur interest in Jung, I find this very interesting, and I read in his description the terrible aloneness he must have felt (e.g., where he badly regrets discussing his experiences with others), an aloneness Jung himself undoubtedly felt during his own period of “confrontation with the unconscious”, when his sanity was very much at stake, although he at least had a few confidantes such as Toni Wolff.

I wonder whether Voevodsky made any attempt to discuss these matters with a Jungian analyst. The term “active imagination” has been used to describe a general category of such unusual experiences, and a lot of analytical work goes into interpreting and integrating them; in successful cases one may discover the essential humanity underlying them, aspects of humanity which for most people remain dormant. It doesn’t have to involve God either. Wolfgang Pauli had a series of unusual and stressful experiences and dreams sometime in the early 1930s and sought help from Jung (I don’t think Jung was his direct analyst, but CGJ followed his case with interest, and it later became the basis for a great deal of his book Psychology and Alchemy), and I believe he was helped immensely by the analysis, but I don’t think Pauli ever brought in God as part of the picture when he referred to it.

In Jung’s case, despite the terrible isolation he must have felt at times, he later felt a connection with and integration into a long line of history, especially through his psychological study of alchemy. It may be terribly presumptuous of me to speculate, but it may have been possible for Voevodsky to resolve and integrate those experiences with the help of a sufficiently sensitive Jungian analyst (who call themselves ‘analytical psychologists’). I understand that Mitchell Feigenbaum also got a lot of practical help and advice through this type of analysis, although I don’t know that he reported any unusual spiritual experiences.

Posted by: Todd Trimble on October 7, 2017 10:06 AM | Permalink | Reply to this
