

August 8, 2016

What is a Formal Proof?

Posted by Mike Shulman

There’s been some discussion recently in the homotopy type theory community about questions like “must type-checking always be decidable?” While this particular phrasing of the question is specific to type theory (and somewhat technical as well), it is really a manifestation of a deeper and more general question: what is a formal proof?

At one level, the answer to this question is a matter of definition: any particular foundational system for mathematics defines what it considers to be a “formal proof”. However, the current discussions are motivated by questions in the design of foundational systems, so this is not the relevant answer. Instead the question is what properties should a notion of “formal proof” satisfy for it to be worthy of the name?

To start with let me emphasize that whenever I say a “proof” I will mean a correct proof. In addition to defining a notion of (correct) formal proof, a foundational system often defines some class of “arguments” that may or may not be correct; but for now, let’s just consider the correct proofs.

I’m inclined to claim that the real sine qua non of a notion of “formal proof” is a soundness theorem. That means that if we can prove C (in some formal system) under hypotheses A and B, say, and we know (externally to the formal system) that A and B are true, we ought to be able to conclude (again externally) that C is also true. For if a proof doesn’t guarantee the truth of its conclusion, what good is it to prove anything?

To be sure, categorical logicians want more than a simple soundness theorem that refers to “truth”: we want to be able to interpret proofs in arbitrary sufficiently structured categories. More precisely, proofs should provide a means of constructing an initial (or free) category of some sort, which we can map uniquely into any other such category 𝒞 to interpret proofs in terms of objects and morphisms in 𝒞. An ordinary soundness theorem is just the special case of this when 𝒞 is a category that we consider “the real world”, such as a category of sets. We might also want to have a dual “completeness theorem” that everything true is provable, in some sense. However, while those are all nice, without at least a simple soundness theorem I think it’s hard to justify calling something a “proof”.

Now, how do we prove a soundness theorem? In principle, I’m willing to be open-minded about this, but the only way I’ve ever seen to do it is by induction. That is, a formal proof is (or gives rise to something that is) inductively constructed by some collection of rules, and we prove soundness by proving that each of these rules “preserves truth”, so that when we put a bunch of them together into a proof, truth is still preserved all the way through.

For example, one common rule (called “or-elimination”) says that if we can prove “A or B”, and assuming A we can prove C, and also assuming B we can prove C, then we can deduce C without assuming anything extra. This rule is sound under the ordinary meaning of “or”, in the following sense: assuming inductively that the three premises are sound, we conclude that (1) “A or B” is true, that (2) if A is true then so is C, and that (3) if B is true then so is C. By (1), it must be that either A is true or B is true; in the first case, we deduce that C is true from (2); while in the second case we deduce that C is true from (3). Thus, in all cases C must be true. This is one of the “inductive steps” in a proof by “structural induction” that all proofs preserve truth, so that the formal system is sound.

(I hesitated before including this example at all, because it looks so tautological that one feels as if nothing is happening. The point is that it’s shifting the proof from the object-theory to the meta-theory: the rule of proof in the object-theory is sound because it represents a pattern of reasoning that’s correct in the meta-theory. Rest assured that it becomes much less trivial for more complicated systems like dependent type theory, and also when we want to interpret proofs into arbitrary categories.)

So in conclusion, it seems to me (at the moment) that any notion of “formal proof” worthy of the name must be (or include, or give rise to) some sort of inductively defined structure that we can use to prove a soundness theorem. In type theory, these inductively defined structures are called derivation trees.
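As a concrete illustration of what such an inductively defined structure can look like, here is a deliberately tiny OCaml sketch for a propositional fragment (all names here are invented for the example, not taken from any existing system); the last definition spells out the or-elimination step of the soundness argument as a pattern of reasoning in the meta-language.

(* Formulas of a tiny propositional fragment. *)
type prop =
  | Atom of string
  | Or   of prop * prop
  | Imp  of prop * prop

(* Derivation trees, defined inductively: each constructor is one rule of the
   formal system.  (The OCaml type does not by itself enforce that the pieces
   fit together; the checker sketched further below does that.) *)
type derivation =
  | Hyp        of prop                              (* use a hypothesis              *)
  | Or_intro_l of derivation * prop                 (* from A, infer "A or B"        *)
  | Or_intro_r of prop * derivation                 (* from B, infer "A or B"        *)
  | Or_elim    of derivation * derivation * derivation
                  (* from "A or B", "A |- C" and "B |- C", infer C *)
  | Imp_intro  of prop * derivation                 (* from "A |- B", infer "A -> B" *)
  | Imp_elim   of derivation * derivation           (* from "A -> B" and A, infer B  *)

(* Meta-level disjunction, used only to state the inductive step below. *)
type ('a, 'b) meta_or = MLeft of 'a | MRight of 'b

(* The inductive step of soundness for or-elimination: the rule is sound because
   it names a pattern of reasoning already valid in the meta-language.  If
   "A or B" holds, and C follows from A, and C follows from B, then C holds. *)
let or_elim_sound (ab : ('a, 'b) meta_or) (f : 'a -> 'c) (g : 'b -> 'c) : 'c =
  match ab with
  | MLeft a  -> f a   (* case (2) of the argument above *)
  | MRight b -> g b   (* case (3) of the argument above *)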

Now, as I mentioned above, many formal systems also include some notion of “argument” that might or might not be a proof. Indeed, I’m tempted to claim that it’s impossible to avoid dealing with this at some point. Of course, ordinary mathematics written on paper is not a formal proof in any formal system; but even if we tried (totally infeasibly) to always write complete formal proofs on paper, there would always be the possibility (because “paper is untyped”) that we mis-applied a rule somewhere. One might say that formal proof is a mathematical abstraction that can exist essentially nowhere in reality.

The question of “type-checking” is about how we get from an argument to a proof. Of course, this depends on what kind of argument we are talking about! For instance, most type theories include a notion of “term” that plays the role of a kind of argument. Terms are, roughly, one-dimensional syntactic representations of derivation trees (or parts of them); but rather than directly being defined inductively as derivation trees are, the “well-typed terms” (those that represent derivation trees) are singled out from a larger class of “untyped terms”. Thus the latter are a sort of “argument” that might or might not give rise to a proof.
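To make the distinction concrete, here is a continuation of the OCaml sketch above (again, the names are invented for the example): untyped terms are just a datatype of syntax trees, strictly more permissive than derivation trees, and the “well-typed” ones are singled out by a checking function, sketched after the next paragraph.

(* Untyped terms: one-dimensional syntax that may or may not represent a
   derivation.  [Case] is the or-elimination form case(M, u.P, v.Q);
   [Lam] and [App] correspond to implication introduction and elimination. *)
type term =
  | Var  of string                          (* a hypothesis, by name             *)
  | Inl  of term * prop                     (* inject into "A or B" (B supplied) *)
  | Inr  of prop * term
  | Case of term * string * term * string * term
  | Lam  of string * prop * term
  | App  of term * term

(* A context assigns propositions to hypothesis names. *)
type ctx = (string * prop) list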

For instance, a proof by “or-elimination” as above would be represented by a term like case(M,u.P,v.Q), where M is a proof of “A or B”, P is a proof of C that gets to use a hypothesis u of A, and Q is a proof of C that gets to use a hypothesis of B. This is an expression of the sort that could be typed into a computer proof assistant; it’s formally analogous to arithmetic expressions like x+(y*z) that you can write in a Java or Python program. But in addition to the problems of potential typos and misspellings, we can put together strings of symbols that “look like terms” but don’t actually represent proofs; for instance, maybe we write case(M,u.P,v.Q) where M is not actually a proof of “A or B” for any A and B (maybe it is a proof of “if A then B”), or where P and Q are not proofs of the same conclusion C. In a statically typed programming language, this sort of thing produces a compiler error; that’s basically the same sort of thing as the type-checking of a proof assistant.
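Continuing the sketch, a checker that recovers a derivation tree from a term, or rejects it, might look as follows; the failure modes just described show up as the checker returning None. Since the function is structurally recursive it always terminates, so for this toy language the question “does this term represent a proof?” is decidable.

(* Type-checking recovers a derivation (and its conclusion) from a term, or
   fails.  [List.assoc_opt] looks a hypothesis up in the context. *)
let rec check (g : ctx) (t : term) : (derivation * prop) option =
  match t with
  | Var x ->
      (match List.assoc_opt x g with
       | Some a -> Some (Hyp a, a)
       | None -> None)
  | Inl (m, b) ->
      (match check g m with
       | Some (d, a) -> Some (Or_intro_l (d, b), Or (a, b))
       | None -> None)
  | Inr (a, m) ->
      (match check g m with
       | Some (d, b) -> Some (Or_intro_r (a, d), Or (a, b))
       | None -> None)
  | Case (m, u, p, v, q) ->
      (match check g m with
       | Some (dm, Or (a, b)) ->
           (match check ((u, a) :: g) p, check ((v, b) :: g) q with
            | Some (dp, c1), Some (dq, c2) when c1 = c2 ->
                Some (Or_elim (dm, dp, dq), c1)
            | _ -> None)   (* branches fail, or prove different conclusions *)
       | _ -> None)        (* scrutinee is not a proof of a disjunction     *)
  | Lam (x, a, m) ->
      (match check ((x, a) :: g) m with
       | Some (d, b) -> Some (Imp_intro (a, d), Imp (a, b))
       | None -> None)
  | App (m, n) ->
      (match check g m, check g n with
       | Some (dm, Imp (a, b)), Some (dn, a') when a = a' ->
           Some (Imp_elim (dm, dn), b)
       | _ -> None)

(* Hypothetical propositions A and B. *)
let a = Atom "A" and b = Atom "B"

(* A well-formed use of or-elimination: from w : "A or B", re-prove "A or B". *)
let ok =
  check [ ("w", Or (a, b)) ]
    (Case (Var "w", "u", Inl (Var "u", b), "v", Inr (a, Var "v")))
  (* = Some (Or_elim (...), Or (Atom "A", Atom "B")) *)

(* The scrutinee proves "if A then B", not "A or B": rejected. *)
let bad_scrutinee =
  check [ ("f", Imp (a, b)) ]
    (Case (Var "f", "u", Var "u", "v", Var "v"))
  (* = None *)

(* The two branches prove different conclusions (A vs. B): rejected. *)
let bad_branches =
  check [ ("w", Or (a, b)) ]
    (Case (Var "w", "u", Var "u", "v", Var "v"))
  (* = None *)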

The specific question “should type-checking be decidable” is about whether there should be an algorithm to which you can hand an untyped term and be guaranteed to get an answer of either “yes, this represents a derivation tree” or “no, it doesn’t” in a finite amount of time. In other words, your compiler can never hang; it must either succeed or return with an error message. But from the perspective I’m advocating here, “decidable checking” is not a fundamental property of a formal system, or more precisely not a property of the proofs in that system. Rather, it is a property of a certain class of “arguments” that might or might not represent proofs.

In particular, although many type theories have decidable type-checking for terms, essentially all practical type theories also include other kinds of “argument” that do not have “decidable checking”. For instance, practical implementations of dependent type theory (such as Coq, Agda, and Lean) never force the user to write out complete terms (let alone derivation trees); instead they have powerful “elaborators” that can fill in implicit arguments using techniques such as unification and typeclass inference. These elaborators are not guaranteed to terminate in general; for instance, it’s quite possible to set up a loop in typeclass inference that causes Coq to hang, and “higher-order unification” is known to be an undecidable problem.
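To see how such a search can fail to terminate, here is a caricature in the same OCaml sketch (the rule format and names are made up, not the API of any real elaborator): naive backward chaining over user-declared “instance” rules, with nothing bounding the recursion.

(* Goals are atomic class constraints; rules say: to solve the head,
   first solve all of the premises. *)
type goal = string
type rule = { premises : goal list; head : goal }

(* Naive backward chaining: try each rule whose head matches, recursively
   solving its premises.  Nothing bounds the recursion, so a cyclic rule
   sends the search into a loop: checking this kind of "argument" is not
   guaranteed to terminate. *)
let rec solve (rules : rule list) (g : goal) : bool =
  List.exists
    (fun r -> r.head = g && List.for_all (solve rules) r.premises)
    rules

(* Terminates immediately: *)
let _ = solve [ { premises = []; head = "Eq_int" } ] "Eq_int"        (* true *)

(* Diverges (do not run): the rule "to solve C, first solve C". *)
(* let _ = solve [ { premises = [ "C" ]; head = "C" } ] "C" *)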

And this is even before we get to the informal arguments found in pencil-and-paper mathematics. Converting those into a formal proof of any sort is certainly undecidable by any algorithm!

So we certainly can’t do without “notions of argument” that don’t have “decidable type-checking”. But what I would ask of designers of new formal systems is whether there is, somewhere in the specification of the system, a (probably inductive) notion of proof for which one can prove a soundness theorem (and, hopefully, also a categorical initiality theorem, and maybe a completeness theorem). If not, how do you justify calling the things you are talking about “proofs”?

Note that “type-checking” is trivially “decidable” for proofs of this sort; by their very nature they are correct proofs. The further our “arguments” get from the inductive formal proofs, the more difficult a problem “type-checking” becomes, until in the limit we get to “I have found a truly marvelous proof of this result which this margin is too narrow to contain”. Somewhere in there is the boundary between decidable and undecidable type-checking. Somewhere else in there is the boundary between feasible and infeasible type-checking. And in practice, we certainly make use of “notions of argument” that lie on both sides of each of those lines.

However, it does seem to me that if a formal system is going to have at its core some inductive notion of proof, then for a proof assistant to honestly call itself an implementation of that formal system, it ought to include, somewhere in its internals, some data structure that represents those proofs reasonably faithfully. And given how trivial “type-checking” is for the actual formal proofs, it seems to me that anything calling itself a “reasonably faithful representation” of those proofs ought to at least have decidable type-checking. Those representations may not be what the user of the system calls “terms”; but they ought to be there somewhere. Informally, the purpose of a proof assistant is to assist the user in producing a proof; it may not actually go all the way to produce a complete inductive formal proof, but it ought to at least produce something close enough that the rest of the distance can be crossed algorithmically.

However, I remain open to being convinced otherwise.

EDIT: After a long and probably hard-to-follow discussion in the comments, I have been convinced otherwise (though I remain open to being convinced back). I still say that a proof assistant ought to somehow “faithfully represent formal proofs” internally. But now I have an actual mathematical/practical reason for that (not just a philosophical one), which implies a concrete criterion for what “faithfully represent” means: I want to be able to compute with the formal proofs that get created, such as by applying a constructive proof of the soundness/initiality theorem to them. With this criterion, it turns out that decidability is a red herring. What I want is more than decidability in one sense — we need the actual proofs, not just a decidably checkable representation of them — but also less in another sense, since we don’t need to actually represent the entire proof as a data structure, only “compute with that data structure” as if we had it. See this comment below and its responses for further discussion.

Posted at August 8, 2016 9:06 PM UTC


114 Comments & 0 Trackbacks

Re: What is a Formal Proof?

So we certainly can’t do without “notions of argument” that don’t have “decidable type-checking”.

I’m not sure what you mean by this sentence. I thought you were going to conclude the opposite position, or perhaps I can’t parse the triple negative correctly.

Posted by: David Roberts on August 9, 2016 9:07 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Sorry about the triple negative. (-: “Can’t do without” means “we must have”, and that’s what I was saying in the previous paragraphs: we always have to deal with notions of argument for which proof-checking is not decidable.

Posted by: Mike Shulman on August 9, 2016 4:26 PM | Permalink | Reply to this

Re: What is a Formal Proof?

And I thought Mike was, like, into intuitionism.

Posted by: John Baez on August 10, 2016 7:46 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Surely only a semi-decidable type check is necessary?

Posted by: Gen Zhang on August 9, 2016 11:41 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Why?

Notice that I’m not arguing that there’s anything intrinsic to decidability (versus, say, feasibility) that’s important. I’m saying that there ought to be a faithful representation of complete inductive formal proofs, and decidability is some kind of (necessary but not sufficient) precise indicator of such “closeness”.

Posted by: Mike Shulman on August 9, 2016 4:28 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Notwithstanding Andrej’s point below about implicit versus explicit proofs, even with an explicit proof I would be happy with a checker that, if it terminates, verifies that the proof is sound, and fails to terminate otherwise. In practice, I would actually only be happy with a proof that can be type checked in some definite finite time like a week!

Posted by: Gen Zhang on August 10, 2016 8:18 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I started writing a reply to your very nice post, and it turned into a blog post of mine. Sorry for reusing the title…

My main point is that I disagree with your requirement that complete proof terms, or near-approximations of them, should be stored in memory. Nothing is gained by such a requirement. Presumably you want verifiability and replicability of proofs, but neither of those requires that actual proofs be explicitly stored. Instead, we need efficient ways to recreate proof terms, given suitable instructions on how to do it.

Posted by: Andrej Bauer on August 9, 2016 1:30 PM | Permalink | Reply to this

Re: What is a Formal Proof?

One consequence of this is that decidability of proof terms is irrelevant, since we never actually check complete proof terms. We only ever implicitly create them using a trusted tool.

Posted by: Andrej Bauer on August 9, 2016 1:33 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Thanks for the nice detailed reply! I actually agree with all your points about practical behavior of proof assistants, and in particular the illusion of .vo files. My issue is a purely philosophico-mathematical one: if a proof assistant is assisting me to create a proof, shouldn’t it actually create a proof? I don’t care whether that proof is stored as a “certificate” or just deleted as soon as it’s made.

You say “we need efficient ways to recreate proof terms”, which seems to also imply that we’re creating them in the first place. As long as we’re doing that — for a meaning of “term” that is as close as possible to an actual proof — I’m happy. As I said to Gen, I’m not putting decidability on a pillar for any virtues that it itself possesses; it’s just a rough metric that seems reasonable to me to approximate the meaning of “as close as possible”. There are other criteria one could add; e.g. if the decidability check is totally computationally infeasible, one might argue that that also means the terms are not really sufficiently close to the proofs.

Posted by: Mike Shulman on August 9, 2016 6:19 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Of course, we create detailed proofs with proof assistants. They are (implicit in) the execution traces of the proof checker. We can have a meta-theorem which says that a proof tree can be reconstructed from the execution trace and that is all we ever need.

Why do you feel safer by having the proof tree stored in memory rather than have it implicit in the execution trace? It’s too big for you to look at, so you have to trust another tool to verify it. You could argue that the verifier is simpler than the program which constructed the proof tree, but this ain’t so in HOL and Andromeda, whose trusted core is way smaller than that of Coq. And I won’t even discuss Agda. So we have another ideal here about small verifiers that does not materialize in practice.

Posted by: Andrej Bauer on August 9, 2016 9:21 PM | Permalink | Reply to this

Re: What is a Formal Proof?

From my experience working in the (much simpler) areas of boolean satisfiability and combinatorics, I would not be comfortable trusting a computer proof until it has gone through at least two different verifiers, implemented entirely independently by two different programming teams who have not seen each other’s algorithms or code… In that respect, being able to store the proof on disk somewhere and ship it around would be rather handy.

Posted by: Ciaran McCreesh on August 9, 2016 10:20 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I wonder whether this is a case where complexity actually makes things easier? I’m pretty willing to trust a proof that’s been formalized in one proof assistant. Proof assistants do contain bugs, of course, but paradoxically, because of the complexity of the system and the mathematics being formalized in it, I think it’s pretty rare that a bug in the system would match up “just so” with a bug in the proof to enable it to slip by.

The only case I can think of where something even vaguely like this has happened is when we combined Coq’s experimental universe polymorphism with the hacky way to “implement” higher inductive types using private inductives and didn’t notice at first that some universes weren’t getting tracked correctly; and that seems like a pretty unusual situation. I suspect a slightly more common error is when a formalization doesn’t actually formalize what it claims to formalize, but only a human reader can be expected to catch that.

In fact, with current technology it’s basically impossible for any proof assistant to output something that can be checked by a different proof assistant. It would be neat if there were a sort of “standardized language” for (say) type-theory-based proof assistants so that they could check each other’s proof terms; but it would be tricky to manage because no two of them implement exactly the same underlying type theory.

Posted by: Mike Shulman on August 10, 2016 7:13 AM | Permalink | Reply to this

Re: What is a Formal Proof?

There is Open Theory for the HOL family. It would indeed be nice to have such a standard for type theory too. It used to be the case that Matita could read .vo files.

Posted by: Bas Spitters on August 10, 2016 10:05 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I’m confused. If Andromeda doesn’t actually store a proof to be verified by another tool, then how can its verifier be smaller than that of Coq if it doesn’t have a verifier at all?

Posted by: Mike Shulman on August 10, 2016 7:01 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I’m confused. If Andromeda doesn’t actually store a proof to be verified by another tool, then how can its verifier be smaller than that of Coq if it doesn’t have a verifier at all?

This was a question to Andrej about his use of the word “verifier”, but here’s what I think he meant, along with a bunch of relevant background:

The “verifier” in an LCF-style system checks the validity of proof steps as they are performed. In the case of Andromeda, this also involves building a term. One might be tempted to call it a “proof term”, but that’s misleading because it’s not checked or even feasibly checkable. It’s well-typed by construction. The proof steps are performed by Andromeda MetaLanguage (AML).

Formal systems that don’t use propositions-as-types make the LCF approach less subtle: terms are typechecked, proofs need not ever be built, but are known to exist because the steps were successful.

For Andromeda, it’s best to think of proof as the process of establishing the existence of a derivation of a judgment. It’s judgments that get proven (because they’re synthetic, not analytic), not types. Types have “evidence” (their inhabitants) but one cannot feasibly recognize evidence without proof that it has the expected type.

In systems with analytic judgments, you have proof terms, so an alternative to the LCF approach is to typecheck proof terms. This is what Coq and Agda do. Don’t be fooled into thinking Coq is LCF-style because it has tactics! It’s about the design of the trusted kernel. Coq’s kernel is a trusted typechecker. Andromeda’s “nucleus” is a trusted implementation of the primitive tactics. (Invoked from AML.)

Now why might an LCF kernel be simpler than a typechecking kernel? Probably because a rather LCF-ish kernel is a part of a typechecker! A typechecker will traverse a proof term, performing checks as it goes. These checks are exactly the checks that would be performed by trustworthy tactics. The beauty of LCF-style is the recognition that only the steps need to be checked. How they’re “fed into” the computer is irrelevant. There’s no technical need for proof terms, which merely script the proof steps in the least imaginative possible way.
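(As an aside, here is a minimal OCaml caricature of the abstract-type-of-theorems discipline just described, with invented names, reusing the propositional fragment sketched in the post above: the only way to build a value of the abstract type Thm.t is through the trusted rule functions, so every theorem is valid by construction and no derivation tree is ever stored.)

(* An LCF-style kernel for the toy propositional fragment.  [Thm.t] is
   abstract; outside the module it can only be produced by the trusted
   primitive rules below. *)
module Thm : sig
  type t                                    (* a proven judgment               *)
  val concl : t -> prop list * prop         (* hypotheses |- conclusion        *)
  val hyp : prop -> t                       (* A |- A                          *)
  val or_intro_l : t -> prop -> t           (* from G |- A, get G |- A or B    *)
  val or_elim : t -> t -> t -> t option     (* fails if the pieces do not fit  *)
end = struct
  type t = prop list * prop
  let concl th = th
  let hyp a = ([ a ], a)
  let or_intro_l (g, a) b = (g, Or (a, b))
  let or_elim (g1, ab) (g2, c1) (g3, c2) =
    match ab with
    | Or (a, b) when c1 = c2 ->
        (* discharge hypothesis A in the first branch and B in the second *)
        let drop x = List.filter (fun h -> h <> x) in
        Some (g1 @ drop a g2 @ drop b g3, c1)
    | _ -> None
end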

I oversimplify: The reason switching Coq to an LCF-style kernel would actually probably not be so simple is termination checking. It’s a non-compositional check, so you cannot check the term in one pass, like you’d have to if proof terms were literally degenerate proof scripts. I see this as one bad idea chasing another. You wouldn’t need termination checking in the kernel if you could prove termination without gunking up the term. Woe is ITT. Or, if you’re happy with gunky terms, you could ask for an LCF-style system for plain ITT (without termination checking).

So to answer your question, the real reason why Coq’s kernel is larger than that of HOL Light or Andromeda is simply that Gallina is a dramatically more complicated language than HOL or extensional Type:Type. ;)

Posted by: Matt Oliveri on August 10, 2016 7:46 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I’ve never been able to figure out what people mean by “analytic” or “synthetic” judgments. The way that makes the most sense to me to think of type theory is that the basic rules give an inductive presentation of the derivations of judgments, not involving any terms. That is, we derive a judgment like A,B ⊢ A×B using the pairing operation, and so on. Semantically, each judgment is a “kind of object or morphism” in some categorical structure, and each derivation of that judgment represents a particular object or morphism. Now those derivation trees are generally too unwieldy to work with directly, so we annotate them by variables and terms in such a way that the entire tree can be reconstructed from the term that it assigns to the root, like x:A, y:B ⊢ ⟨x,y⟩ : A×B. But the fundamental object is the un-annotated derivation tree, which is what I think we should also call a “proof”. All type theories that I’m familiar with can be presented in this way. What is it about a judgment that makes you call it “analytic” or “synthetic”?

(I’m still hoping to hear from Andrej too.)

Posted by: Mike Shulman on August 10, 2016 9:22 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Do you know Martin-Löf’s paper Analytic and Synthetic Judgements in Type Theory? He links synthetic judgements to the “existential form of judgement” since this requires the construction of an object, rather than merely checking by virtue of the meanings of the terms involved.

Posted by: David Corfield on August 11, 2016 8:21 AM | Permalink | Reply to this

Re: What is a Formal Proof?

(I didn’t notice this comment from yesterday.)

I’m not an expert, but from what I can tell, “analytic” and “synthetic” are about whether the (intuitionistic) truth of a judgment is “self-evident” (analytic) or not (synthetic).

That is, we derive a judgment like A,B ⊢ A×B using the pairing operation, and so on.

Judgments like that are synthetic: it’s not self-evident when a type is inhabited.

Semantically, each judgment is a “kind of object or morphism” in some categorical structure, and each derivation of that judgment represents a particular object or morphism.

This roughly corresponds to the fact that a naive syntactic representation of a derivation could be tacked onto the judgment to make it analytic. It is self-evident when a given derivation correctly derives a given judgment.

Now those derivation trees are generally too unwieldy to work with directly, so we annotate them by variables and terms in such a way that the entire tree can be reconstructed from the term that it assigns to the root, like x:A,y:B ⊢ ⟨x,y⟩ : A×B.

Except that’s not what ETT does, since a derivation generally cannot be reasonably reconstructed from a term. In fact, you’ve just pretty much described the whole idea of ITT. In ITT, a judgment like that is analytic, as a requirement. In ETT, a judgment like that is still synthetic.

There are three a priori reasonable responses to that problem, that I know of.

1) ETT is broken, so use ITT.

2) ETT terms are not proof terms, so add proof terms to get a different analytic judgment with “secondary witnesses”.

3) ETT judgments are synthetic, so don’t rely on proof terms, use LCF-style proof.

In each case, you get a way to reconstruct a derivation of the original “termless” judgment. But of course the termless judgment is not enough, because of dependency. (2) and (3) turn out to not affect the semantics of ETT, (1) changes things quite a bit. Perhaps OTT has essentially the same semantics as ETT.

(I am not going to bite your head off for restricting attention to intrinsic semantics of type theory, but be aware that there’s more to the story.)

Posted by: Matt Oliveri on August 11, 2016 3:27 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Hang on, so you regard A,B ⊢ A×B and x:A, y:B ⊢ ⟨x,y⟩ : A×B as different judgments? I always thought the second was just an annotation of the first by a term. But I suppose you could regard the annotated judgment as a different judgment, like defining the graph of the annotation function as an inductive family on its own.

I don’t know what “self-evident” means. But it sounds like maybe you’re saying that an analytic judgment is one that can only be derived in one way? (That is, the hprops of the metatheory?) Or one for which there is an algorithm to decide whether it has a derivation? Please give me definitions, not hand-wavy words like “self-evident”.

Posted by: Mike Shulman on August 11, 2016 5:05 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Hang on, so you regard A,B ⊢ A×B and x:A,y:B ⊢ ⟨x,y⟩ : A×B as different judgments? I always thought the second was just an annotation of the first by a term.

Yes, that makes them plenty different. It’s not exactly the same distinction, but consider “P is provable” vs “p is a proof of P”. There’s an existential quantifier in the former, and it tends to push you from a decidable proposition to a semi-decidable proposition.

But I don’t mean to insist that “analytic” vs “synthetic” is necessarily decidable vs semi-decidable. It’s more hand-wavy and open-ended. A synthetic judgment may not be from a formal system at all.

I don’t know what “self-evident” means. But it sounds like maybe you’re saying that an analytic judgment is one that can only be derived in one way? (That is, the hprops of the metatheory?)

No, ITT’s judgments are considered analytic.

Or one for which there is an algorithm to decide whether it has a derivation?

Some people interpret it that way.

Please give me definitions, not hand-wavy words like “self-evident”.

It seems to me that the distinction is hand-wavy by design. But maybe not. Maybe you should talk to David Corfield about it.

Posted by: Matt Oliveri on August 11, 2016 6:53 PM | Permalink | Reply to this

Re: What is a Formal Proof?

It’s fine for philosophers to use hand-wavy words, but I want to do mathematics here. So if “analytic” and “synthetic” can’t be given precise meanings, maybe you could re-write this explanation without using them?

Posted by: Mike Shulman on August 11, 2016 7:09 PM | Permalink | Reply to this

Re: What is a Formal Proof?

It seems my usage of the terms was not critical:

--- (because they're synthetic, not analytic)
+++ (because it's impractical to systematically check them)

--- In systems with analytic judgments, you have proof terms,
+++ In some systems, you have proof terms,

Here it is in full.

Posted by: Matt Oliveri on August 11, 2016 7:34 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Except that’s not what ETT does

It seems to me that that means we should not use the same word “term” for the things that appear in ETT and ITT, since they are really fundamentally different things.

I am not going to bite your head off for restricting attention to intrinsic semantics of type theory, but be aware that there’s more to the story.

I don’t think I can be aware of that, because I don’t know what you mean by “intrinsic semantics”. To my mind semantics are always extrinsic to a theory: one constructs a model of the theory working in some metatheory.

Posted by: Mike Shulman on August 11, 2016 5:17 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Except that’s not what ETT does

It seems to me that that means we should not use the same word “term” for the things that appear in ETT and ITT, since they are really fundamentally different things.

I’m not sure I agree they’re fundamentally different things. They may be completely analogous things from fundamentally different systems.

Anyway, I think it’s a great idea to find new terms for “term”. (Heh) The problem is “term” is so vague, but it’s good because it says we’re talking about structured syntax. So I recommend, at least for now, a scheme like “X term” vs “Y term”. In (exactly?) one message to the HoTT list, I used “proof term” vs “meaning term”. Except I consider ITT’s terms to be playing both roles simultaneously. (And that, ladies and gentlemen, is the essence of what’s wrong with ITT.) ETT terms are meaning terms and not proof terms.

It remains to decide (heh) whether proof terms need decidable typechecking. I suggest no, because practical things you do with proof terms don’t depend on decidability, they depend, in practice, on practically-feasible checking.

Posted by: Matt Oliveri on August 11, 2016 7:07 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I’ll use the phrase “derivation term” for a term that represents a derivation. As I said elsewhere, it seems to me that for this to actually be a “representation” rather than a collection of hints to inform a search for a derivation, the process of deducing a derivation from a derivation term must be decidable. In this language, the question is whether or not derivation terms should be present at all; I’m arguing yes, in some form, because the whole point is to produce a derivation in some representation.

I’m not quite sure what distinction you’re making between “proof term” and “meaning term”. Is a “meaning term” one that’s “intended” to uniquely identify a semantic object? That is, different derivations that are labeled with the same “meaning term” are intended to represent the same morphism in a category? Whereas a “proof term” for you is… a term that isn’t a meaning term? I’m not sure how to isolate a class of terms other than “representing a particular derivation” (derivation terms, which you don’t mean because your “proof terms” don’t have decidable typechecking) and “representing a particular semantic object” (meaning terms, I assume).

In any case, I view it as a hallmark of a well-designed type theory that there is as little distance as possible between derivations and semantic meanings. In an ideal world, each derivation would represent exactly one semantic object, so that a “derivation term” and (what I’m assuming you mean by a) “meaning term” would be the same thing. (Why? Because type theory, in its role as a presentation of free categorical structures, is essentially about proving coherence theorems, and a good coherence theorem specifies “canonical forms” for objects and morphisms. Type-theoretic derivations are a way to generate only the canonical forms, or at least to exclude as many as possible of the non-canonical ones.) In practice this isn’t always possible, but it’s good to keep the distance as small as we can, and being able to use the same terms as derivation terms and meaning terms seems like a good indicator of this.

Posted by: Mike Shulman on August 11, 2016 7:23 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I’ll use the phrase “derivation term” for a term that represents a derivation. As I said elsewhere, it seems to me that for this to actually be a “representation” rather than a collection of hints to inform a search for a derivation, the process of deducing a derivation from a derivation term must be decidable.

Then I will understand “derivation term” as what I call “proof term”, but with the requirement that checking be decidable, instead of the requirement that checking be feasible in practice.

Personally, I would’ve required a “derivation term” to be checkable in linear time. That would, of course, be a much narrower definition.

Is a “meaning term” one that’s “intended” to uniquely identify a semantic object?

No, because judgmentally equal meaning terms must denote the same object, even if the terms are different.

That is, different derivations that are labeled with the same “meaning term” are intended to represent the same morphism in a category?

To interpret a meaning term unambiguously, you generally need to also consider the context. In other words, a term can mean different things in different contexts. (Who’d’ve thought it?) In the absence of unique typing, a term can also mean different things at different types.

Whereas a “proof term” for you is… a term that isn’t a meaning term?

No. A proof term may or may not be a meaning term. “Proof”/”meaning” is a distinction between roles for terms. They don’t have to be disjoint sets.

A proof term is a term that can be checked by some algorithm to produce a derivation. The check doesn’t have to be decidable, but it has to be feasible in practice.

Posted by: Matt Oliveri on August 11, 2016 8:52 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Whereas a “proof term” for you is… a term that isn’t a meaning term?

No. A proof term may or may not be a meaning term. “Proof”/”meaning” is a distinction between roles for terms. They don’t have to be disjoint sets.

Of course I didn’t mean to be talking about individual terms. What I meant was something like “is a ‘notion of term’ considered to consist of ‘proof terms’ if it does not consist of ‘meaning terms’?”

To interpret a meaning term unambiguously, you generally need to also consider the context.

Of course what I meant is that different derivations of the same judgment labeled with the same meaning term must denote the same morphism. It’s not meaningful to ask that morphisms with different domains and codomains be equal.

Is a “meaning term” one that’s “intended” to uniquely identify a semantic object?

No, because judgmentally equal meaning terms must denote the same object, even if the terms are different.

That doesn’t contradict what I said. In other language, what I asked was “is a ‘meaning term’ one for which (or, if you insist on being pedantic, does a ‘notion of term’ ‘consist of meaning terms’ if) the relation between terms and their semantics is one-to-one, i.e. each term is associated to at most one semantics”? The question of whether the opposite of the relation is also one-to-one, i.e. whether the same semantic object could be associated to more than one term, is a different one.

Posted by: Mike Shulman on August 11, 2016 9:12 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Whereas a “proof term” for you is… a term that isn’t a meaning term?

No. A proof term may or may not be a meaning term. “Proof”/”meaning” is a distinction between roles for terms. They don’t have to be disjoint sets.

Of course I didn’t mean to be talking about individual terms.

We were being hasty with the language. Of course, we should always be talking about notions/sorts/languages of terms, and notions/forms/classes of judgment.

There is then, conceptually, the type of all notions of term. “Proof term” refers to the subset of notions of term such that… Likewise for “meaning term” and “derivation term”.

Similarly, you actually determine whether a form of judgment is decidable. An individual judgment does not constitute a decision problem. Likewise, I suppose, for “analytic” and “synthetic”.

What I meant was something like “is a ‘notion of term’ considered to consist of ‘proof terms’ if it does not consist of ‘meaning terms’?”

I understood you. The answer is the same, with the above explanation in mind. So “proof terms” and “meaning terms” are non-disjoint sets of notions of term. ITT’s terms are a notion of term in both sets. (Really I should call them “proof term notions” and “meaning term notions”, but that’s too long.)

Is a “meaning term” one that’s “intended” to uniquely identify a semantic object?

No, because judgmentally equal meaning terms must denote the same object, even if the terms are different.

That doesn’t contradict what I said…

Hmm you’re right, you asked if a meaning term uniquely identifies an object, and I said no, a meaning term is not unique among those that identify an object.

I tripped up over the English. I was actually thinking you changed directions on me. Derp.

Maybe you could’ve saved me by asking if a meaning term identifies a unique object.

Posted by: Matt Oliveri on August 11, 2016 11:49 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Let me explain by example why your “derivation term” is a practically useless criterion.

We use derivation terms for ETT of the form (J,n), where J is a judgment (representation) and n is a natural number. The number puts an upper bound on the size of the derivation.

So we can prove some hard equation with a term something like (fun _ _ => refl). The “real” proof is in an equality judgment derivation, of course. So the derivation term will be something like (⊢ fun _ _ => refl : <hard equation>, 538475623984).

How can this qualify as a derivation term? Because there are finitely many derivations of a finite size, so if the “proof” is bad, you will theoretically find out about it eventually.

Decidability has nothing whatsoever to do with usefulness of a proof representation, or closeness of proofs to honest derivations.

Posted by: Matt Oliveri on August 12, 2016 12:49 AM | Permalink | Reply to this

Re: What is a Formal Proof?

In any case, I view it as a hallmark of a well-designed type theory that there is as little distance as possible between derivations and semantic meanings…

I’ve never seen anyone propose this as a goal for type theory. This paragraph all seems very fishy. Will you go through the argument more slowly, or should I ignore it?

Posted by: Matt Oliveri on August 12, 2016 3:25 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I’ve never seen anyone propose this as a goal for type theory.

Well, now you have. Another reason for it that I didn’t mention is that it makes the proof of the soundness theorem easier if you don’t need to be constantly checking whether the result of your interpretation function is independent of derivation. What seems fishy about it to you?

Posted by: Mike Shulman on August 12, 2016 4:24 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I guess the main thing that sounds odd is:

Type-theoretic derivations are a way to generate only the canonical forms, or at least to exclude as many as possible of the non-canonical ones.

But type theory deliberately has non-canonical forms. Computation is expressed as reduction to canonical form.

And:

…it’s good to keep the distance as small as we can, and being able to use the same terms as derivation terms and meaning terms seems like a good indicator of this.

That seems to indicate that you have a strong preference for ITT, where the meaning terms are also derivation terms.

Anyway, do coherence theorems even address free categorical structures with natural numbers objects? I thought coherence theorems mostly use proof theory for very weak systems. If so, then it seems you’re judging apples as if they were oranges.

Another reason for it that I didn’t mention is that it makes the proof of the soundness theorem easier if you don’t need to be constantly checking whether the result of your interpretation function is independent of derivation.

I thought the CwF machinery and initiality theorem take care of this.

Posted by: Matt Oliveri on August 12, 2016 9:22 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Computation is expressed as reduction to canonical form.

Right – and ideally that’s a syntactic operation, which happens before you even talk about semantics, so that semantics only has to deal with the canonical forms. E.g. we can use a cut-free sequent calculus to present a free categorical structure, and then interpret derivations that use cut into categories by first eliminating cut, purely syntactically, then interpreting the resulting cut-free derivation.

That seems to indicate that you have a strong preference for ITT, where the meaning terms are also derivation terms.

Yes.

Anyway, do coherence theorems even address free categorical structures with natural numbers objects?

I would argue that under a loose reading of “coherence theorem”, any interesting presentation of a free categorical structure can be called a (partial) coherence theorem. So, for instance, the presentation of the free topos using IHOL or ETT is a partial coherence theorem for a category that in particular has an NNO.

it makes the proof of the soundness theorem easier if you don’t need to be constantly checking whether the result of your interpretation function is independent of derivation.

I thought the CwF machinery and initiality theorem take care of this.

Well, yeah, I’m talking about the proof of the initiality theorem.

Posted by: Mike Shulman on August 12, 2016 9:31 AM | Permalink | Reply to this

Re: What is a Formal Proof?

E.g. we can use a cut-free sequent calculus to present a free categorical structure, and then interpret derivations that use cut into categories by first eliminating cut, purely syntactically, then interpreting the resulting cut-free derivation.

Hmm, I see, but I wouldn’t have thought this would be easier than the “obvious” strategy of interpreting judgmental equality derivations as equality.

Also, I don’t think you can avoid judgmental equality using cut elimination, if you have judgmental funext and NNO.

That seems to indicate that you have a strong preference for ITT…

Yes.

What do you think of OTT?

Well, yeah, I’m talking about the proof of the initiality theorem.

I feel that it’s short-sighted to optimize the whole language design for the convenience of a one-time metatheorem.

Posted by: Matt Oliveri on August 12, 2016 7:43 PM | Permalink | Reply to this

Re: What is a Formal Proof?

In the case of sequent calculus for a coherence theorem, it’s not necessarily that it’s easier, but that it gives you more information about the category because you have canonical representatives for morphisms. That’s the goal of a coherence theorem. I was intentionally picking a simple case to illustrate the point; it’s certainly true that things are more complicated for more complicated theories.

Given that the initiality theorem isn’t actually even proven yet, I don’t think it’s justified to sweep everything under its rug. Moreover, it’s not just a one-time theorem, because there isn’t just one type theory in the world; there are lots of interesting and important type theories and we care about semantics for all of them. So designing type theories so as to make these theorems easy to prove and generalize as needed is good. More importantly, as I mentioned in another comment, I think the ITT approach is more likely to generalize away from the strict model-category presentation viewpoint that the CwF machinery seems to tie you to; also it’s more likely to be invariant under equivalence of model categories.

I haven’t taken the time to understand OTT, but this comment of Dan’s suggests that I can view it as a “reasonable first step”, perhaps towards a cubical type theory?

Posted by: Mike Shulman on August 12, 2016 8:27 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Given that the initiality theorem isn’t actually even proven yet, I don’t think it’s justified to sweep everything under its rug.

I’m not sweeping it under the rug. Just weighing the inconvenience of a supposedly-harder metatheorem against the inconvenience of repeatedly working in a less friendly logic.

Moreover, it’s not just a one-time theorem, because there isn’t just one type theory in the world; there are lots of interesting and important type theories and we care about semantics for all of them. So designing type theories so as to make these theorems easy to prove and generalize as needed is good.

All other things being equal, yes. I think the slight disagreement is caused by different scenarios we’re assuming. I’m assuming a general-purpose, foundational system that a few people work out and many people use. You’re probably assuming exotic internal languages that a few people work out and then use.

Posted by: Matt Oliveri on August 12, 2016 8:50 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I haven’t taken the time to understand OTT, but this comment of Dan’s suggests that I can view it as a “reasonable first step”, perhaps towards a cubical type theory?

I asked because I put it in the ITT category–because it uses the same notion of term for proof terms and meaning terms–but it has set-level extensionality principles, and canonicity. This makes it the least unpleasant form of set-level ITT.

So for me, that makes it only moderately worse than ETT, but if you like it when meaning terms are derivation terms, you might like OTT better than ETT.

Come to think of it, I have no idea how cut elimination goes down in OTT. It may be that “meaning terms are derivation terms” is not really a good test for a system’s suitability for coherence theorems.

Posted by: Matt Oliveri on August 12, 2016 9:02 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I don’t think I can be aware of that, because I don’t know what you mean by “intrinsic semantics”. To my mind semantics are always extrinsic to a theory: one constructs a model of the theory working in some metatheory.

Oh right. By “intrinsic semantics” I meant semantics where typing is intrinsic. Of course the semantics itself is extrinsic. The style of categorical semantics using CwFs, or similar, supports intrinsic typing only. With extrinsic typing, you reason about what type(s) an object has. (The object also has an intrinsic type, but it may be that all objects of interest have the same intrinsic type.)

Posted by: Matt Oliveri on August 11, 2016 7:16 PM | Permalink | Reply to this

Re: What is a Formal Proof?

With extrinsic typing, you reason about what type(s) an object has.

Yep, definitely not interested in that. (-:

Posted by: Mike Shulman on August 11, 2016 7:24 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I’m confused. If Andromeda doesn’t actually store a proof to be verified by another tool, then how can its verifier be smaller than that of Coq if it doesn’t have a verifier at all?

This was a question to Andrej about his use of the word “verifier”, but here’s what I think he meant, along with a bunch of relevant background:

The “verifier” in an LCF-style system checks the validity of proof steps as they are performed. In the case of Andromeda, this also involves building a term. One might be tempted to call it a “proof term”, but that’s misleading because it’s not checked or even feasibly checkable. It’s well-typed by construction. The proof steps are performed by Andromeda MetaLanguage (AML).

Formal systems that don’t use propositions-as-types make the LCF approach less subtle: terms are typechecked, proofs need not ever be built, but are known to exist because the steps were successful.

For Andromeda, it’s best to think of proof as the process of establishing the existence of a derivation of a judgment. It’s judgments that get proven (because it’s impractical to systematically check them), not types. Types have “evidence” (their inhabitants) but one cannot feasibly recognize evidence without proof that it has the expected type.

In some systems, you have proof terms, so an alternative to the LCF approach is to typecheck proof terms. This is what Coq and Agda do. Don’t be fooled into thinking Coq is LCF-style because it has tactics! It’s about the design of the trusted kernel. Coq’s kernel is a trusted typechecker. Andromeda’s “nucleus” is a trusted implementation of the primitive tactics. (Invoked from AML.)

Now why might an LCF kernel be simpler than a typechecking kernel? Probably because a rather LCF-ish kernel is a part of a typechecker! A typechecker will traverse a proof term, performing checks as it goes. These checks are exactly the checks that would be performed by trustworthy tactics. The beauty of LCF-style is the recognition that only the steps need to be checked. How they’re “fed into” the computer is irrelevant. There’s no technical need for proof terms, which merely script the proof steps in the least imaginative possible way.

I oversimplify: The reason switching Coq to an LCF-style kernel would actually probably not be so simple is termination checking. It’s a non-compositional check, so you cannot check the term in one pass, like you’d have to if proof terms were literally degenerate proof scripts. I see this as one bad idea chasing another. You wouldn’t need termination checking in the kernel if you could prove termination without gunking up the term. Woe is ITT. Or, if you’re happy with gunky terms, you could ask for an LCF-style system for plain ITT (without termination checking).

So to answer your question, the real reason why Coq’s kernel is larger than that of HOL Light or Andromeda is simply that Gallina is a dramatically more complicated language than HOL or extensional Type:Type. ;)

Posted by: Matt Oliveri on August 11, 2016 7:30 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Thanks for eliminating analytic/synthetic. Unfortunately there is still too much jargon here for me to understand. I feel like you are using “proof” at both an object-level and a meta-level. Is that true? What exactly do you mean by a “proof” here? Maybe you could re-explain it again using unambiguous words like “derivation”, “derivation term”, and “meaning term” (assuming we agreed on the meaning of the latter).

Posted by: Mike Shulman on August 11, 2016 8:12 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I guess that’s a good idea.
--------------

The “verifier” in an LCF-style system checks the validity of [derivation] steps as they are performed. In the case of Andromeda, this also involves building a [meaning] term. One might be tempted to call it a “proof term”, but that’s misleading because it’s not checked or even feasibly checkable. It’s well-typed by construction. The [derivation] steps are performed by Andromeda MetaLanguage (AML).

Formal systems that don’t use propositions-as-types make the LCF approach less subtle: [meaning] terms are typechecked [I was assuming something like HOL, with decidable typechecking.], [any form of formal proofs of propositions, which aren’t types] need not ever be built, but are known to exist because the steps were successful.

For Andromeda, it’s best to think of [derivation] as the process of establishing the existence of a derivation of a judgment. It’s judgments that get proven [sic] (because it’s impractical to systematically check them), not types. [This was an unsuccessful attempt to explain why, in ETT, the emphasis of formal proof shifts to the judgments. I don’t know how to update it, because I deliberately conflated derivations (for ETT) and proof terms (for ITT).] Types have “evidence” (their inhabitants) but [in ETT] one cannot feasibly recognize evidence without [a derivation] that it has the expected type.

In some systems, you have proof terms, so an alternative to the LCF approach is to typecheck proof terms. This is what Coq and Agda do. Don’t be fooled into thinking Coq is LCF-style because it has tactics! It’s about the design of the trusted kernel. Coq’s kernel is a trusted typechecker. Andromeda’s “nucleus” is a trusted implementation of the primitive tactics. (Invoked from AML.)

Now why might an LCF kernel be simpler than a typechecking kernel? Probably because a rather LCF-ish kernel is a part of a [sufficiently-sane] typechecker! A typechecker will traverse a proof term, performing checks as it goes. These checks are exactly the checks that would be performed by trustworthy tactics. The beauty of LCF-style is the recognition that only the steps need to be checked. How they’re “fed into” the computer is irrelevant. There’s no technical need for proof terms, which merely script the [derivation] steps in the least imaginative possible way. [Keep in mind, the derivation process also yields a meaning term.]

I oversimplify: The reason switching Coq to an LCF-style kernel would actually probably not be so simple is termination checking. [Which is not sufficiently-sane.] It’s a non-compositional check, so you cannot check the term in one pass, like you’d have to if proof terms were literally degenerate proof scripts. I see this as one bad idea chasing another. You wouldn’t need termination checking in the kernel if you could prove termination without gunking up the term. Woe is ITT. Or, if you’re happy with gunky terms, you could ask for an LCF-style system for plain ITT (without termination checking).

So to answer your question, the real reason why Coq’s kernel is larger than that of HOL Light or Andromeda is simply that Gallina is a dramatically more complicated language than HOL or extensional Type:Type. ;)

Posted by: Matt Oliveri on August 11, 2016 9:49 PM | Permalink | Reply to this

Re: What is a Formal Proof?

In your post, you used “proof” to refer to derivation trees, and thought of type-checking as converting “arguments” to proofs.

A consequence of this terminology is that most proof assistants for dependent type theory do not produce proofs, because most proof assistants don’t work with derivation trees. “Proof term” usually refers to a feasibly-typecheckable term, not a derivation tree.

Because of this, and also because you say you don’t care if a proof is deleted as soon as it’s made, I find your reply to Andrej very strange.

Shouldn’t it be sufficient, as Andrej is saying, to just check (semi-decidably) that an argument has, in principle, a corresponding proof?

Posted by: Matt Oliveri on August 10, 2016 2:00 AM | Permalink | Reply to this

Re: What is a Formal Proof?

“Proof term” usually refers to a feasibly-typecheckable term, not a derivation tree.

Actually, I mean feasible to typecheck in practice. Not in the worst case.

Posted by: Matt Oliveri on August 10, 2016 2:32 AM | Permalink | Reply to this

Re: What is a Formal Proof?

If you read what I wrote, I said that

a proof assistant … ought to include… some data structure that represents those proofs reasonably faithfully

I specifically didn’t claim that they ought to include actual proofs, among other reasons because those are mathematical abstractions that are nearly impossible to represent in reality. I don’t see a problem with a “proof term” not being itself a “proof”, just like a “movie script” is not the same as a “movie”.

Posted by: Mike Shulman on August 10, 2016 6:58 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Thanks, but that doesn't answer my question. It actually makes me more confused about what your opinion is. But rather than point out the things that are making me confused, let me just try asking questions.

1) Under what conditions should a proof assistant store a proof representation?

2) What are the requirements for the proof representation being sufficiently close to derivation trees?

For reference, my answers are:
1) No condition
2) N/A

Proof assistants are for checking the reasoning that is inputted to them, not maintaining formal proofs, which are completely useless, computationally.

Keep in mind that a proof assistant for type theory needs to store terms. But not because they’re proof representations; because they denote objects you might use.

Posted by: Matt Oliveri on August 10, 2016 4:05 PM | Permalink | Reply to this

Re: What is a Formal Proof?

1) I don’t see any reason to store a proof representation in the sense of “saving it for later”. But for a proof assistant to be producing a proof it has to represent that proof somehow in the process of producing it.

2) “Sufficiently close” is a judgment call, and different people may make that call differently. I’m saying that decidability of type-checking is one reasonable, and actually fairly minimal, condition that gives some indication of “sufficient closeness”.

Maybe an analogy will help: consider the relationship between spoken English and written English. Leaving aside nuances of tone of voice and so on, we can regard written English as a representation of spoken English. Of course, there are lots of strings of letters we can write, like “bh aued ntviu iuml wmvieo”, that don’t represent any spoken English phrase; but we can tell immediately whether a string of letters does in fact represent a spoken phrase or not.

But now suppose that there were infinitely many ways to try to “pronounce” a given string of letters, and the only way to tell whether a string of letters represented some spoken phrase was to try pronouncing it in all possible ways. In some sense it would still be possible to “read”: if a book were actually written in correct written English then we would be sure of being able to finish it, but if there were a mistake somewhere then our process of reading might hang forever without our ever knowing whether we would be able to finish the book. Would it really be reasonable, in such a situation, to say that written English was a representation of spoken English?

Or, to put it even more baldly: if you are happy with semi-decidable type checking of “proof representations”, then here is a proof representation for you. Every proof is represented by the word “foo”. The type-checking algorithm first checks that it’s been given the word “foo”, and then generates all possible arguments in some other language and checks one by one whether they are a proof of the desired statement. Is “foo” really a representation of a proof of Fermat’s Last Theorem?
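
To spell out the absurdity, here is a sketch of that “checker” in OCaml. Everything in it is hypothetical: enumerate stands in for “generate the n-th argument in some other language” and proves for “check whether that argument proves the statement”.

(* rep is the alleged "proof representation"; statement is what it allegedly
   proves.  Either the search hits an argument that checks, or it diverges. *)
let check_foo ~enumerate ~proves rep statement =
  if rep <> "foo" then failwith "not a proof representation";
  let rec search n =
    if proves (enumerate n) statement then ()   (* accepted: some proof exists *)
    else search (n + 1)                         (* otherwise keep searching, possibly forever *)
  in
  search 0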

Posted by: Mike Shulman on August 10, 2016 4:56 PM | Permalink | Reply to this

Re: What is a Formal Proof?

1) I don’t see any reason to store a proof representation in the sense of “saving it for later”. But for a proof assistant to be producing a proof it has to represent that proof somehow in the process of producing it.

Fair enough. I was taking it for granted that you wouldn’t produce a proof that you’re not going to use. So I have to add a question:
3) Under what conditions should a proof assistant produce a proof representation?

My answer:
3) Only when the user explicitly asks it to convert the inputted proof representation into a different output format. (E.g. for compatibility with other proof assistants.)

Did you think it was a technical requirement somehow to create a proof in order to check that a proof exists? Programmers aren’t that constructive! :)

The whole idea of LCF-style proof assistants is precisely that nothing like a derivation tree needs to be built. It’s as Andrej says: The (trace of the) process of proof (“argument”) checking itself contains a derivation.

2) “Sufficiently close” is a judgment call, and different people may make that call differently. I’m saying that decidability of type-checking is one reasonable, and actually fairly minimal, condition that gives some indication of “sufficient closeness”.

We’ll have to postpone discussion of this till we have an agreed-upon reason to be working with proofs in the first place.

Maybe an analogy will help: consider the relationship between spoken English and written English…

This is an interesting analogy, but it doesn’t fit tightly enough. Humans are happy to invent pronunciations for written sentences. When the pronunciation of a sentence affects its meaning, the written form is often misinterpreted.

Or, to put it even more baldly: if you are happy with semi-decidable type checking of “proof representations”, then here is a proof representation for you. Every proof is represented by the word “foo”…

Yes, for an RE formal system, brute force search is the canonical semi-decidable proof checker. I’m not happy with semi-decidable checking. I demand checking that’s feasible in practice. Neither semi-decidability nor decidability is sufficient. The point is that decidability is not necessary or even helpful. In case it doesn’t go without saying, I also demand that there exists an argument that will convince the checker if and only if a derivation exists.

…and then generates all possible arguments in some other language and checks one by one whether they are a proof of the desired statement.

Even in this deliberately absurd scenario, you have an odd assumption. The checker would not need to generate arguments in any concrete representation just to semi-decide the existence of a derivation. Only terms and judgments would need to be represented. Note that this applies even to systems where terms are not a reasonable proof representation. (E.g. ETT, HOL)

Posted by: Matt Oliveri on August 10, 2016 5:55 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I’m going to postpone answering (3) until I get an answer to this question.

When the pronunciation of a sentence affects its meaning, the written form is often misinterpreted.

That’s irrelevant to the point of the analogy.

I demand checking that’s feasible in practice. Neither semi-decidability nor decidability is sufficient.

What does it mean for a non-decidable procedure to be feasible? By “in practice” do you just mean that the algorithm terminates in a reasonable amount of time in “most real-world” situations? That’s certainly a useful thing to have, but I don’t think it’s sufficient for the “proof term” that goes into the algorithm to count, mathematically and philosophically, as a “representation” of the output proof. In the absence of decidability, at best the “proof term” is a fairly explicit hint to assist an algorithm in searching for a proof, even if the hint is explicit enough that in most practical cases the search is over pretty quickly. (My thanks to you, by the way, for helping me clarify this point in my head about why decidability is important.)

The checker would not need to generate arguments in any concrete representation just to semi-decide the existence of a derivation. Only terms and judgments would need to be represented.

I don’t even understand the distinction you are making, since terms and derivations are both a particular kind of “argument” as I am using the word; but in any case this also seems irrelevant to the point.

Posted by: Mike Shulman on August 10, 2016 6:15 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I’m going to postpone answering (3) until I get an answer to [this question].

I’ve replied to the question. Hope it helps.

That’s irrelevant to the point of the analogy.

So I guess written English corresponds to arguments, spoken English corresponds to proofs, and I don’t understand the analogy. The best way to deliver spoken English is spoken English. The best way to deliver a proof is an argument.

What does it mean for a non-decidable procedure to be feasible? By “in practice” do you just mean that the algorithm terminates in a reasonable amount of time in “most real-world” situations?

Yes.

That’s certainly a useful thing to have, but I don’t think it’s sufficient for the “proof term” that goes into the algorithm to count, mathematically and philosophically, as a “representation” of the output proof.

You are right that it’s really a utility measure disguised as a requirement. Of course, if it’s too slow too often, too many people will consider it effectively useless. Like brute force search.

I’m not interested in what counts philosophically as a representation of a proof. Mathematically, we define a notion of proof. It’s just math, so it doesn’t matter if it’s computationally silly. It’s “optimized” for obviousness. When it comes to practical implementations, I’m saying proof (argument) checking that isn’t decidable can make good sense, and successful checking can entail the existence of whatever mathematical notion of proof you like.

In the absence of decidability, at best the “proof term” is a fairly explicit hint to assist an algorithm in searching for a proof, even if the hint is explicit enough that in most practical cases the search is over pretty quickly.

This is a good way to look at it. I see sufficiently explicit hint as an ideal for arguments. Taking this idea to an extreme is the idea of “oracular proof”, where the proof consists literally of a sequence of numbers indicating the winning branch to take at each point of nondeterminism in an (agreed-upon) proof search algorithm. Of course, the hard part is agreeing upon a search algorithm, and finding winning choice sequences with it. (I can hunt down a reference, if that summary doesn’t make it Googlable.)
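
A rough sketch of replaying such an oracle (OCaml; solved and branches are hypothetical stand-ins for the agreed-upon search procedure, and the “proof” is just the list of choices):

(* The "proof" is the oracle: one integer per nondeterministic branch point. *)
let rec replay ~solved ~branches (oracle : int list) state =
  if solved state then true
  else
    match oracle with
    | [] -> false                                          (* ran out of advice *)
    | choice :: rest ->
        (match List.nth_opt (branches state) choice with
         | None -> false                                   (* advice names a branch that is not there *)
         | Some next -> replay ~solved ~branches rest next)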

“Fairly explicit hint” may be going too far for “proof term”, but I don’t see a great need for proof terms in actual implementations anyway. (Except as a natural fragment of checkable arguments.)

Posted by: Matt Oliveri on August 10, 2016 8:45 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I don’t even understand the distinction you are making, since terms and derivations are both a particular kind of “argument” as I am using the word; but in any case this also seems irrelevant to the point.

Oh, I thought an argument had to be sufficient to find a proof.

So in FOL (or HOL), you have propositions and proofs as distinct classes of syntax, right? With type theory, you have judgments and derivations instead, as rough analogues. Judgments actually get represented, derivations don’t. Or for FOL, propositions actually get represented, proofs don’t.

I’m just remarking that you seem to take it for granted that there’s some representation of proofs/derivations. There usually isn’t.

Posted by: Matt Oliveri on August 10, 2016 9:25 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Now I don’t know what you mean by “represented”. Andrej said he was using “proof representation” in place of my “argument”, so according to that usage proofs certainly do get represented. Terms are one thing that can represent them.

Posted by: Mike Shulman on August 10, 2016 9:29 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Terms are one thing that can represent them.

Not in ETT, if checking is to be practically-feasible.

Posted by: Matt Oliveri on August 10, 2016 10:12 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Terms are one thing that can represent them.

Not in ETT, if checking is to be practically-feasible.

By “can” I meant sometimes, i.e. for some notions of “term” in some theories.

Posted by: Mike Shulman on August 10, 2016 10:18 PM | Permalink | Reply to this

Re: What is a Formal Proof?

>> Terms are one thing that can represent them.
> Not in ETT, if checking is to be practically-feasible.
By “can” I meant sometimes, i.e. for some notions of “term” in some theories.

Cool. So then by:

I’m just remarking that you seem to take it for granted that there’s some representation of proofs/derivations. There usually isn’t.

I meant to point out that in a proof assistant for ETT, it’s reasonable for there to not be any decidable representation of derivations, and I had thought this contradicted an assumption of yours. (I left out the “decidable”. I guess I thought it was implicit since you made it a minimal requirement.)

Sorry about the sloppy phrasing.

Posted by: Matt Oliveri on August 10, 2016 10:50 PM | Permalink | Reply to this

Re: What is a Formal Proof?

If I have to explain the analogy any further, then it probably won’t do you any good, so I won’t bother. The other example conveyed much the same point.

Maybe programmers can define a proof to be whatever is most useful to call a proof, but as a mathematician, I am interested in exactly what proofs are. I would like a formal proof to actually be a proof, not just a hint suggesting to a computer how it could go about finding a proof.

Posted by: Mike Shulman on August 10, 2016 9:25 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Maybe programmers can define a proof to be whatever is most useful to call a proof, but as a mathematician, I am interested in exactly what proofs are. I would like a formal proof to actually be a proof, not just a hint suggesting to a computer how it could go about finding a proof.

That's fine. I hope my other replies have cleared up the situation. If not, point out a logic and what you count as actually being a proof for it, and I'll try to explain how to establish provability using either proof term checking or LCF-style tactics, and which one seems better-suited.

Posted by: Matt Oliveri on August 10, 2016 10:22 PM | Permalink | Reply to this

Re: What is a Formal Proof?

That’s certainly a useful thing to have, but I don’t think it’s sufficient for the “proof term” that goes into the algorithm to count, mathematically and philosophically, as a “representation” of the output proof.

OK, I think I’m starting to get your position, and I can try to summarize the difference from my (Andrej’s?) position.

I think you’re taking for granted that there are necessarily proof terms somewhere in a proof assistant. This is false. Terms are not necessarily proof terms. (Think HOL.) Even with propositions-as-types, terms are not necessarily proof terms. (Think ETT.) In LCF-style systems, if there are proof terms at all, it’s merely an emergent special case of tactic scripts.

Coq and Agda are very unusual systems for depending fundamentally on a language of proof terms. (Because of termination checking in the definition of their logics.)

Plain ITT also happens to come with proof terms, but it also makes sense to prove inhabitance of a type without constructing a term.

I'm starting to think that the assumption that proof checking is exactly typechecking of proof terms is responsible for the whole anti-equality-reflection culture. You can do proof checking that way, and in very particular systems (like ITT), it's natural. But for most logics it's weird, because a notion of proof term is entirely auxiliary to the logic and its semantics anyway.

If you’re not going to demand ITT (or a descendant), it seems you should not demand proof terms either.

Posted by: Matt Oliveri on August 10, 2016 10:08 PM | Permalink | Reply to this

Re: What is a Formal Proof?

The entire post was an explanation of why I think there should be a faithful representation of derivation trees somewhere in a proof assistant. So no, I am not taking it for granted; I am arguing for it. I am fully aware that “terms” in many theories are not faithful representations of derivation trees, which is why I phrased everything carefully to avoid talking only about terms.

Posted by: Mike Shulman on August 10, 2016 10:20 PM | Permalink | Reply to this

Re: What is a Formal Proof?

The entire post was an explanation of why I think there should be a faithful representation of derivation trees somewhere in a proof assistant. So no, I am not taking it for granted; I am arguing for it.

Ah, progress! So what about throwing away that representation as soon as you build it? Still OK with that? It seems pointless to me. It would be equivalent, but more efficient, to not build it. In fact, it seems obviously pointless. That’s what confused me.

I am fully aware that “terms” in many theories are not faithful representations of derivation trees, which is why I phrased everything carefully to avoid talking only about terms.

Thank you for your care, and sorry about going over that issue all over again.

Posted by: Matt Oliveri on August 10, 2016 11:06 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I think there is too much confusion going on because we are using “formal proof” instead of “formal derivation”. I am going to switch to “formal derivation”.

And just to be sure, let me call proof-checking a procedure which accepts a representation r of a formal derivation and outputs the conclusion c of the formal derivation represented by r, provided that there exists a formal derivation that r represents. Depending on what the procedure does with an invalid r it is:

  1. decidable: if it terminates and reports failure
  2. semi-decidable: if it might run forever

We shall assume that we always have at least a semi-decidable procedure. Furthermore, we shall also require:

  • completeness: every formal derivation has a representation
  • derivation-irrelevance: r may represent several formal derivations, and it does not matter which one is found (explicitly or implicitly) by the procedure.
  • uniqueness of conclusion: all formal derivations represented by r have the same conclusion

Derivation-irrelevance means that r represents an inhabited set of formal derivations, while uniqueness of conclusion states that all formal derivations in this inhabited set have the same conclusion.

Now, to answer your question: under the stated assumptions, foo just by itself violates uniqueness of conclusion, but the pair (foo,C) where C is the conclusion is a valid representation of a formal derivation.

The thing called “proof certificate” or “proof term” by Coq/Agda has the exact same status as (foo,C), except that it carries a little more useful information that increases efficiency, and also restricts the set of formal derivations represented. It does not cut down the set to a singleton because it does not specify how equalities should be derived.

In the realm of proof assistants, the game is how to communicate and verify existence of formal derivations efficiently so the only interesting questions are those of efficiency. The only reason we reject (foo,C) as the universal proof representation is that it is too inefficient in time. It’s got great space efficiency, though.

In fact, we can imagine a formal system in which (foo,C) would be the perfect representation of the existence of a formal derivation of C. For instance, if we are in possession of a complete linear-time proof-search procedure, then there is no point in communicating proofs, because already reading them takes as long as it does to reconstruct them.
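
Stated as an interface (just a sketch; all the names are made up), the two variants and the required properties look like this:

module type PROOF_CHECKER = sig
  type rep          (* representations r *)
  type judgment     (* conclusions c *)

  (* 1. decidable: always terminates, reporting failure on an invalid r *)
  val check_dec : rep -> judgment option

  (* 2. semi-decidable: outputs the conclusion if r represents some formal
     derivation, and may run forever on an invalid r *)
  val check : rep -> judgment

  (* Required properties, stated here only as comments:
     - completeness: every formal derivation has a representation;
     - derivation-irrelevance: it does not matter which of the derivations
       represented by r the procedure finds;
     - uniqueness of conclusion: all derivations represented by r have the
       same conclusion, so the output above is well-defined. *)
end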

Posted by: Andrej Bauer on August 11, 2016 9:52 AM | Permalink | Reply to this

Re: What is a Formal Proof?

The thing called “proof certificate” or “proof term” by Coq/Agda has the exact same status as (foo,C)

Really? Doesn’t it have a decidable proof-checking procedure versus a semi-decidable one for (foo,C)?

It does not cut down the set to a singleton because it does not specify how equalities should be derived.

I believe that for some type theories, at least, there is a way to ensure uniqueness, by using a “canonical-only” presentation where there is actually no equality judgment. Isn’t that right?

In the realm of proof assistants, the game is how to communicate and verify existence of formal derivations efficiently so the only interesting questions are those of efficiency.

Why is that the game? I’m a mathematician. I don’t just care whether something has a proof; I care what the proof is. When someone claims to have proven something, I want them to be able to actually produce a proof, not just say “you can figure this out on your own in the time it would take me to tell you about it.”

Posted by: Mike Shulman on August 11, 2016 5:25 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Why is that the game? I’m a mathematician. I don’t just care whether something has a proof; I care what the proof is. When someone claims to have proven something, I want them to be able to actually produce a proof, not just say “you can figure this out on your own in the time it would take me to tell you about it.”

It’s important to realize that the constructivist notion of proof as a mathematical object need not be a decidably-checkable formal proof. (Indeed, proof objects are semantic, not syntactic.) You should not care about formal proofs, beyond their mathematical content, I figure.

Remember: the things we’re “proving” (deriving) with tactics are judgments. The constructive “proof” objects (evidence) are of types.

If you insist on trivializing the distinction between judgments and types, you are practically forced to use ITT.

Posted by: Matt Oliveri on August 11, 2016 7:47 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I don’t know what you mean by a “proof object”, but to me a “proof” is indeed a syntactic thing. Constructivism is irrelevant here; I’m talking about the way I as a mathematician interact with proofs, regardless of whether those proofs use classical logic or not.

Posted by: Mike Shulman on August 11, 2016 8:08 PM | Permalink | Reply to this

Re: What is a Formal Proof?

In that case, I don’t believe you. I accuse you of not actually caring which derivation gets used for an ETT judgment of the form (Γ ⊢ t : T).

Posted by: Matt Oliveri on August 11, 2016 9:01 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I care which derivation is used in the sense that if you claim to have derived it, I want you to actually have a derivation of it, not just tell me that I can produce a derivation of it.

Posted by: Mike Shulman on August 11, 2016 9:14 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I care which derivation is used in the sense that if you claim to have derived it, I want you to actually have a derivation of it, not just tell me that I can produce a derivation of it.

This is false. You have no right to ask me to “have a derivation” in a specific form that you desire, because I am an agent of thought independent of you. You have no insight into how I work internally. But I can tell you:

I run formal derivations as processes in time. That is all you need to know.

I claim that if you reflect on your own mathematical activity, you will come to a similar conclusion. But I cannot be sure because I have no insight in how you work internally.

I am unwilling to record my derivation processes, and I am unwilling to communicate them to you, because I lack resources to do so. I am willing for the two of us to establish a form of communication that will allow me to communicate advice on how you might reconstruct formal derivations for yourself. If you are crazy enough to store the formal derivations in a notebook or on a computer, that's your problem. I do not care.

Posted by: Andrej Bauer on August 11, 2016 9:44 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I don’t see that you wanting me to have a derivation implies that you care which derivation it is. I’m sticking to my accusation.

Posted by: Matt Oliveri on August 11, 2016 10:46 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Let me try again.

Formal derivations are never constructed explicitly by machines, they are never looked at by mathematicians, they are completely unilluminating to a human, and are just something that happens as a process. They are not objects, ever, in practice. What you learned from logic makes you think they are objects that must exist somewhere. They do exist, but not where you expect them. They exist as processes. I really cannot put it more clearly than that, I think.

When you say Print myTheorem, Coq does not show you a formal proof. If it did, you would be immediately lost, except for the very simplest statements. Therefore, wisely, Coq hides most of the proof from you behind notations, implicit arguments, universe polymorphism, etc. What you see is not a formal proof!

As a mathematician, you want arguments to convince you of truth and to show you how it is done. In human practice you get both by reading arguments written by humans, but those are much closer to Coq scripts and Agda input than they are to formal proofs.

You seem to be asking that formal proofs play the same double role as human arguments do, which I think is a design mistake. Machines are (currently) good at establishing correctness, but not at explaining how it is done. Humans still need to explain to humans how it is done. And we do this by writing Coq scripts and Agda inputs, which others can then replay as processes. Hopefully, we also write some helpful comments along the way.

It is imperative that we observe what actually happens in practice. Traditional logic and foundations of mathematics can benefit enormously from some feedback from practice. We have squeezed the juice out of the (revolutionary) insights of traditional formal logic. Now is the time to review and refine, and adapt logic to the era of computers.

Posted by: Andrej Bauer on August 11, 2016 9:39 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I think I may be inching my way closer to understanding. Is the following a correct description of an “LCF-style” proof assistant?

  1. Write a proof assistant that actually represents derivations internally as the elements of an inductive family of judgments, whose constructors are the rules.

  2. Observe that the fact that this family is inductively defined is only relevant for meta-theory such as soundness and admissibility theorems. For purposes of constructing specific derivations, all we need are the constructors of the family, not its induction principle. Thus, instead of constructors that actually construct a tree, using up time and memory to do so, we can replace them by “fake constructors” having the same types but that don’t actually do anything. As long as the family of judgments is abstract enough to guarantee that these “fake constructors” are the only way to construct elements of them, any program we write using them could equally well be run with the “real constructors” to produce a real proof.

If this is what is meant by “detailed proofs are implicit in the execution traces of the proof checker”, then it seems to me that it actually does satisfy my requirement of “some data structure that represents proofs reasonably faithfully”. For isn’t it just representing the inductive family of judgments with an impredicative encoding? Just like any element of ∀X,X→(X→X)→X gives me a natural number by applying it to ℕ, 0 and succ, here you’re defining a function with a sufficiently abstract type that it could be applied to the actual inductive family of judgments to produce an actual proof.

Posted by: Mike Shulman on August 10, 2016 10:25 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I think I may be inching my way closer to understanding. Is the following a correct description of an “LCF-style” proof assistant? …

It's very close! The main distinction to realize is that it's generally not the types of the constructors that ensure that only valid derivations are built; it's the checks in the execution of the constructors. Just like in Ltac. You can run a tactic, and it'll be like “Nope.” That's not a typechecking error for Ltac; Ltac is untyped. (It might be a typechecking error for Gallina, since Ltac can invoke the Gallina elaborator.)

But yes, you have abstract constructors for derivations, but they don’t actually construct derivation trees; they are “fake”. Ltac constructors are fake because they only construct terms, which may not even be well-typed. LCF-style constructors are fake because they construct nothing, but at least they could’ve constructed valid derivations, as you say. (Assuming they don’t report an error instead.)

Also, these “fake constructors”, which are the primitive tactics, need to maintain the proof state. The details of this vary from system to system, but it involves bookkeeping of terms and judgments, and is necessary to perform sufficient checking.

(Ltac is untyped, and AML is Hindley-Milner-typed, but apparently with one big type for “judgments” (fake derivations). You can also have tactic languages that use types to track the specific judgment that a tactic attempts to prove. (That’s a more recent development.) The important, common feature is that successful execution of a tactic means a derivation exists. It’s an engineering trade-off whether to do checking when a tactic is loaded (due to typechecking the tactic), when it’s run, or some combination.)

Andromeda's “fake constructors” are evidently called “judgment computations”, and are documented in this section of the Andromeda site. Proving in Andromeda is complicated (but hopefully worth it) by the algebraic effect/handler mechanism. If the judgment computations are how you try to convince the computer of something, the algebraic effect stuff allows the computer to demand an explanation of a detail you left out. (You can also have one part of your tactic library demand an explanation from you, or from another part of your tactic library.)
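
Putting your numbered description together with these corrections, a minimal sketch might look as follows (OCaml, toy rules, made-up names; nothing like Andromeda's actual nucleus). The same abstract signature is satisfied both by constructors that build real derivation trees and by “fake” ones that keep only the conclusion, and in both cases it is the run-time check inside the constructor that guards validity.

type fmla = Atom of string | Imp of fmla * fmla

module type DERIV = sig
  type deriv
  val assume : fmla -> deriv
  val imply_elim : deriv -> deriv -> deriv   (* checks the instance; may fail *)
  val concl : deriv -> fmla
end

(* "Real constructors": actual derivation trees are built and stored. *)
module Trees : DERIV = struct
  type deriv = Assume of fmla | ImplyElim of deriv * deriv
  let assume a = Assume a
  let rec concl = function
    | Assume a -> a
    | ImplyElim (f, _) ->
        (match concl f with Imp (_, b) -> b | _ -> assert false)
  let imply_elim f x =
    match concl f with
    | Imp (a, _) when a = concl x -> ImplyElim (f, x)
    | _ -> failwith "imply_elim: premises do not fit the rule"
end

(* "Fake constructors": same signature, but only the conclusion is kept. *)
module Fake : DERIV = struct
  type deriv = fmla
  let assume a = a
  let concl t = t
  let imply_elim f x =
    match f with
    | Imp (a, b) when a = x -> b
    | _ -> failwith "imply_elim: premises do not fit the rule"
end

Any tactic program written against DERIV can be run with either implementation, which is the sense in which the fake constructors could have constructed valid derivations.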

Whew.

If this is what is meant by “detailed proofs are implicit in the execution traces of the proof checker”, then it seems to me that it actually does satisfy my requirement of “some data structure that represents proofs reasonably faithfully”.

Actually, it's much more literal than that. Think of verifying a program. You need a definition of the programming language. Suppose you use operational semantics (i.e. state machines). An execution trace is a sequence of steps. The traces of successful proof checks, using your LCF-style system, are a particular subset. Soundness says that from each such trace you can get the judgment that was checked successfully, and a derivation of that judgment. (And a bunch of other stuff we don't care about.)

It’s much more intuitive to just read soundness as “If it says it’s proven, there’s a proof.”, but saying the trace contains the derivation seems to suggest why there should not be any philosophical preference for “proofs as data” over “proof as a process”.

For isn’t it just representing the inductive family of judgments with an impredicative encoding? Just like any element of ∀X,X→(X→X)→X gives me a natural number by applying it to ℕ, 0 and succ, here you’re defining a function with a sufficiently abstract type that it could be applied to the actual inductive family of judgments to produce an actual proof.

Naw.

1) There’s a difference between the way abstract types are used in actual programs, and impredicative encodings, which are virtually never used.

2) ∀X,X→(X→X)→X is isomorphic to the inductive type of natural numbers, so you cannot possibly avoid constructing and storing something by using impredicative encodings, you know? In practice these data are big and slow.

3) It’s more important that fake derivations be an abstract type in the language with tactics (AML), not the language used to implement the kernel (OCaml). In the implementation language, do whatever you want, as long as it’s correct. The abstraction ensures soundness no matter how incorrect the tactics you run are.

Posted by: Matt Oliveri on August 11, 2016 12:43 AM | Permalink | Reply to this

Re: What is a Formal Proof?

∀X,X→(X→X)→X is isomorphic to the inductive type of natural numbers…

Oh wait, that’s not generally true is it? Well anyway, the implementation that you get with that type is garbage.

Posted by: Matt Oliveri on August 11, 2016 12:46 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I don’t fully understand what you are saying, but it seems as though maybe the distinctions you are making are all artifacts of implementation, which for present purposes I don’t care about. For instance, when I said “representing the inductive family of judgments with an impredicative encoding”, I didn’t mean that the programming language in question (I don’t care which language we’re talking about) actually used such a representation, just that mathematically it was isomorphic to one that did. I would consider this:

Axiom NAT : Type.
Axiom ZERO : NAT.
Axiom SUCC : NAT -> NAT.

Definition three : NAT := SUCC (SUCC (SUCC ZERO)).

to be using an impredicative encoding just as much as this is:

Definition three : forall X, X -> (X -> X) -> X :=
  fun NAT ZERO SUCC => SUCC (SUCC (SUCC ZERO)).

And having one type of judgments whose elements store the particular judgment in question (maybe this is what you mean by “proof state”?), rather than an actual indexed family of judgments, seems a necessary concession to the fact that the implementation language is not dependently typed. Of course that means an application of a rule might be well-typed in the implementation language but still fail, but morally it’d be because it was ill-typed in the real dependently typed version.

Posted by: Mike Shulman on August 11, 2016 1:27 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I would consider this:

Axiom NAT : Type.
Axiom ZERO : NAT.
Axiom SUCC : NAT -> NAT.

Definition three : NAT := SUCC (SUCC (SUCC ZERO)).

to be using an impredicative encoding just as much as this is:

Definition three : forall X, X -> (X -> X) -> X :=
  fun NAT ZERO SUCC => SUCC (SUCC (SUCC ZERO)).

Weird. Technically, neither requires impredicativity. But the second becomes more capable in the presence of impredicativity.

Posted by: Matt Oliveri on August 11, 2016 4:44 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I’m using the phrase “impredicative encoding” just to refer to a particular way of working with an inductive type. It doesn’t require the ambient type theory to have impredicative universes in the technical sense (although there is a precise sense in which the second, at least, is an “impredicative definition”). If you’ve got a better word for this, please share.

Posted by: Mike Shulman on August 11, 2016 4:58 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I don’t understand why you consider those two the same “way of working with an inductive type”, so I can’t recommend a word. But I think that if you’re thinking about fancy types, you’re headed in the wrong direction to understand LCF provers.

Posted by: Matt Oliveri on August 12, 2016 1:01 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Of course that means an application of a rule might be well-typed in the implementation language but still fail, but morally it’d be because it was ill-typed in the real dependently typed version.

That’s probably not right. Strong type systems for programming languages let you establish that a program will never fail. But a proof checker is required to fail when given an invalid proof. The test for a failure scenario must be part of the execution of the program.

Maybe you’re thinking of code being generated and typechecked at runtime, but that’s not how LCF-style checkers work.

Posted by: Matt Oliveri on August 12, 2016 3:40 AM | Permalink | Reply to this

Re: What is a Formal Proof?

You're using “fail” in two different ways. The required “failure” of a proof checker is not the sort of “failure” that is guarded against by a strong type system. When you write a program of a sum type like Int + Error and it returns the error branch (say, Error "oops"), that's not prevented by the type system because it's not a type error, but it's still a semantic error in the sense that it signals something wrong with the input. That's what I meant by “fail”.

Posted by: Mike Shulman on August 12, 2016 4:22 AM | Permalink | Reply to this

Re: What is a Formal Proof?

That’s exactly the point. You can’t always use types to handle the sort of failure that proof checkers deal with. So primitive tactics do not (conceptually or in reality) have types that guarantee correct derivations. They themselves check that the derivation is proceeding correctly. Correct derivations (that aren’t actually constructed) are guaranteed if the tactics run successfully. (And you can express that with a type in the right type system.)

So about your example:

Of course that means an application of a rule might be well-typed in the implementation language but still fail, but morally it’d be because it was ill-typed in the real dependently typed version.

That’s wrong. The application of a rule may fail no matter how fancy the type system in which it typechecks is, simply because it correctly rejected an invalid instance of the rule.

Maybe I misunderstood your whole explanation of how you were thinking about LCF-style systems. The rules are implemented by the primitive tactics, but maybe you weren’t thinking that way.

Posted by: Matt Oliveri on August 12, 2016 6:36 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Can you please give an example of a fully dependently typed rule that is applied in a well-typed way but fails? I can’t think of any.

Posted by: Mike Shulman on August 12, 2016 6:49 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Something like:

modusPonens : forall f a:Judg,
  option {fa:Judg | exists A B:Fmla,
    Proves f (Impl A B) /\ Proves a A /\
    Proves fa B}

You’re probably going to ask, why not:

modusPonens : forall (f a:Judg) (A B:Fmla),
  Proves f (Impl A B)->Proves a A->
  {fa:Judg | Proves fa B}

Well, that would make perfect sense if the tactic language you're implementing actually knows how to check that type. But most tactic languages don't have such types, so we can only model the way tactics actually do checking.

Posted by: Matt Oliveri on August 12, 2016 8:43 AM | Permalink | Reply to this

Re: What is a Formal Proof?

You must take into account the fact that the meta-language on top of type theory is simply typed. Then you will be able to think of a rule application that looks correct from the simply-typed ML-level but is actually not.

For instance, implication elimination is a smart constructor of type imply_elim : judgment -> judgment -> judgment. If we have already computed judgments j1 and j2, it can easily happen that the well-typed computation imply_elim j1 j2 will trigger an exception because j1 is not of the correct form, i.e., not an implication.

You will not be able to get around this by saying that ML itself should be dependently typed. Somebody has to implement ML in a language that is not dependently typed, and at that level the same phenomenon will happen. It cannot be dependent type theories all the way down.

Posted by: Andrej Bauer on August 12, 2016 8:43 AM | Permalink | Reply to this

Re: What is a Formal Proof?

What part of the italicized phrases in

it was ill-typed in the real dependently typed version

and

Can you please give an example of a fully dependently typed rule

was unclear?

I am perfectly aware that the actual implementation language is simply typed. In fact, the original paragraph you are objecting to was exactly my acknowledgment of this fact. My point is that if you write it all in a dependently typed language then there is no “failure”; but because the actual implementation language is not dependently typed there will be failed applications. I.e. exactly what you are saying.

Posted by: Mike Shulman on August 12, 2016 9:22 AM | Permalink | Reply to this

Re: What is a Formal Proof?

One thing to realize is just that it's the tactic language that would have to be dependently-typed (or use some other, more special-purpose type system) to get away with no-check tactics. The implementation language can be whatever.

I wanted to emphasize that no-check tactics and checking tactics are different programs. You cannot look at a runtime checking failure in a checking tactic, and say it’s morally a type error. At least, I find that way of thinking very misleading. I showed you strong types for both checking and no-check tactics. Hopefully you can imagine how the implementations would be well-typed.

Posted by: Matt Oliveri on August 12, 2016 9:40 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I wanted to emphasize that no-check tactics and checking tactics are different programs. You cannot look at a runtime checking failure in a checking tactic, and say it’s morally a type error.

Come to think of it, you probably meant the runtime check failures in tactics are morally typecheck errors for applications of rules that you could hypothetically do in a language for metatheory. This is right. The checks a tactic does are essentially the ones a dependent typechecker would do given derivation tree constructions.

Posted by: Matt Oliveri on August 12, 2016 6:24 PM | Permalink | Reply to this

Re: What is a Formal Proof?

(deep breath)

I apologize for snapping at you; that was uncalled-for. I should know by now that any blog post or comment written late at night should not be posted until the morning. (-:O This whole discussion has been more stressful than usual for some reason.

Posted by: Mike Shulman on August 12, 2016 9:46 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Why do you say it’s misleading? It makes perfect sense to me; if you tried to apply modus ponens to a first argument that’s not an implication, or to a second argument that’s not the hypothesis of the first argument, then that’s a type error because that’s not what modus ponens does.

Posted by: Mike Shulman on August 12, 2016 9:49 AM | Permalink | Reply to this

Re: What is a Formal Proof?

True, it’s not what the modus ponens rule does–the actual constructor for a type of derivation trees. But a tactic is not a rule in that sense. I am talking about typing of tactics, not the original rules.

A checking tactic for modus ponens must accept any two fake derivations. By the invariant of the abstract type, we can think of them as valid derivations. But the tactic needs to check for itself that the derivations actually fit into a modus ponens instance before it can output a fake derivation of the conclusion; otherwise we violate the invariant.

Posted by: Matt Oliveri on August 12, 2016 6:00 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I think maybe you thought I was confusing two things that I’m not. I was not talking about tactics at all; I was talking about the constructors that actually build derivations (or fake it). Those don’t fail in a dependently typed version, and their simply typed versions would only have to do checks because they aren’t dependently typed. But when checking a user term in an attempt to produce a derivation by applying the constructors (I think this is what you mean by a tactic?), there’s always going to be the possibility that the user did something wrong, hence always the possibility of failure, even in a dependently typed version. That’s a different kind of “failure”.

Posted by: Mike Shulman on August 12, 2016 6:08 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I thought in your mental model, the tactics were the “constructors that actually build derivations (or fake it)” (they definitely fake it). Tactics always fake it; rules (constructors for derivation trees) never do. I have no idea why you were thinking about simply-typed rules.

But when checking a user term in an attempt to produce a derivation by applying the constructors (I think this is what you mean by a tactic?)…

There’s no separate machinery for checking “user terms”. The user enters a proof by running tactics (constructors that fake it). With a syntax like AML’s, most tactic calls look like term formers, so you can often read an AML program as an object language term, but with funny tricks. But really, the user interface is some kind of REPL for the tactic language.

(With a dependently-typed tactic language, the REPL would of course typecheck a user-entered tactic call before running it, so you get away with no-check tactics even interactively.)


I’m thinking now that you basically get it, but I’ve been misinterpreting your explanations, and thus thinking you didn’t get it.

Posted by: Matt Oliveri on August 12, 2016 7:02 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I’m thinking now that you basically get it, but I’ve been misinterpreting your explanations, and thus thinking you didn’t get it.

Yes, that’s what I think too; though I’ve done my share of misinterpretation of your explanations. (-:

Posted by: Mike Shulman on August 12, 2016 7:17 PM | Permalink | Reply to this

Re: What is a Formal Proof?

You’re almost right, except that these “smart constructors” do something: they output the conclusion of the derivation (and the only way to generate conclusions is by using the smart constructors). I am going to write yet another blog post with a very minimalistic LCF-style prover, to show how it’s done.

Posted by: Andrej Bauer on August 11, 2016 8:21 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I know this conversation has gotten pretty hairy, but I consider this comment to be an important loose end. So I wanted to point it out again.

I said:

So what about throwing away that representation as soon as you build it? Still OK with that? It seems pointless to me. It would be equivalent, but more efficient, to not build it.

because, way back when, Mike said:

My issue is a purely philosophico-mathematical one: if a proof assistant is assisting me to create a proof, shouldn’t it actually create a proof? I don’t care whether that proof is stored as a “certificate” or just deleted as soon as it’s made.

I assume “proof” back then meant what Mike now calls “derivation term”.

Posted by: Matt Oliveri on August 12, 2016 1:20 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I can't believe it's taken me so long to think of this. But I suppose often that's the way it works: I have an intuition about something, and only after whacking my head against it for a while and saying a lot of nonsense (in this case, publicly) does it become obvious to me why the intuition is right and how to make it precise. Feel free to ignore everything else I've said and respond to this comment as if it were tacked onto the end of the original blog post.

Here is a concrete criterion for whether a proof assistant “produces a reasonably faithful representation of a derivation” in the sense that I want, along with a practical reason to care. Suppose I have a constructive proof of the soundness/initiality theorem, i.e. a function defined by structural induction on derivation trees; some kind of autophagy perhaps, or just a formalized proof of the ordinary initiality theorem relative to contextual categories or what-have-you. Does your proof assistant spit out (or can it be trivially modified to spit out) something that I can plug into this function to produce an actual interpretation of a proof?

This is actually something that I would want to do. One of the main points of categorical semantics for type theory is that we can interpret the same proof in many different categories. Once we are doing all of our mathematics in a proof assistant, this sort of interpretation is part of the mathematics we will want to do. But it would sure be nice if we could use the same ambient proof assistant to write the proofs that we then want to interpret into other categories, rather than having to essentially code up a whole new proof assistant internally in type theory.
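
As a toy illustration of the kind of function I mean (everything below is a stand-in: a real interpretation would target a contextual category or the like rather than untyped lambda terms, but the essential feature is that it consumes actual derivation trees by structural induction):

type fmla = Atom of string | Imp of fmla * fmla

type deriv =
  | Hyp of int                     (* de Bruijn index of a hypothesis *)
  | ImpIntro of fmla * deriv       (* discharge the most recently added hypothesis *)
  | ImpElim of deriv * deriv

(* The target is untyped lambda terms, standing in for whatever structured
   category the real interpretation would land in. *)
type sem = Var of int | Lam of sem | App of sem * sem

let rec interp : deriv -> sem = function
  | Hyp i -> Var i
  | ImpIntro (_, d) -> Lam (interp d)
  | ImpElim (f, x) -> App (interp f, interp x)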

Posted by: Mike Shulman on August 12, 2016 4:33 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Suppose I have a constructive proof of the soundness/initiality theorem, i.e. a function defined by structural induction on derivation trees; some kind of autophagy perhaps, or just a formalized proof of the ordinary initiality theorem relative to contextual categories or what-have-you. Does your proof assistant spit out (or can it be trivially modified to spit out) something that I can plug into this function to produce an actual interpretation of a proof?

Before, I wrote:

So I have to add a question:
3) Under what conditions should a proof assistant produce a proof representation?

My answer:
3) Only when the user asks it explicitly to convert the inputted proof representation to output it in a different format. (E.g. for compatibility with other proof assistants.)

So it looks like you thought of another reason to “convert” a proof to another “format”.

The short answer is “yes”. But I wonder if you're making a category error. The proof assistant deals with terms and proof scripts, which are concrete, formal representations of mathematical abstractions. You want to plug this into a constructive theorem (so it is technically a program) so that it constructs abstract mathematical objects. Not formal representations but the real deal, modulo the fact that they're implementations of constructive objects.

To do this, you’d need to implement (as a programming language) the constructive metalanguage in which your semantic objects are defined. (Or perhaps a non-classical fragment of your classical metalanguage.) If your semantics happens to have been invented in a CS department, this would probably be pretty natural (at least in theory; it’s a strange design, engineering-wise). But I’m guessing that’s not what you’re thinking of.

What you can also do, which you might like better, is translate to terms of the metalanguage in which the semantics were developed. This does not even technically require that the semantics is constructive. (But still, if it’s not, the translated instances may not be illuminating.)

This is actually something that I would want to do…

So this requires a fair bit of engineering. You have an internal language and an external language, so you want usable proof assistants for both. You need the semantics/translator in practically-executable condition. Probably you want this all over a single trusted kernel, not two or more, so it would be some kind of logical framework, able to reliably check proofs in multiple logics.

I am slowly working on a system, mostly for computer scientists, but that would support this workflow as a special case. Don’t hold your breath.

You could try bugging Andrej’s team to try and support this sooner rather than later/never.

For currently-existing systems, the main things I’d personally look at first are Twelf and Isabelle.

…But yeah, this is definitely something you can do with LCF-style systems in principle. The whole point Andrej keeps going over is that you run through the derivation while checking the proof script, so you can pull out any data you want on the fly. You make the primitive tactics, the “fake constructors”, do some actual constructing.

Since the primitives are part of the trusted kernel, you probably don’t want to change it every time you change semantics. Instead, you’d probably add an option once to dump something analogous to a .vo file, for specialized conversion outside the kernel. (I suppose now you’re going to declare victory, but note that this intermediate representation still has no good reason to be decidable.)

An added benefit of an intermediate form is a phase separation. You could add oracles for the slow, proof-searchy parts, so that rechecking the proof is faster for your friends. I guess you'd give them the original too, in case they want to change the development. Or maybe they don't mind redoing the proof search in any case. These are essentially build-system issues.

Posted by: Matt Oliveri on August 12, 2016 6:01 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Rest assured, I’m under no illusions that this would be easy with present technology; nor do I have any particular need or desire to actually do it in the near future. I’m just saying that I want proof assistants to be designed in such a way that it is possible. And yes, I’m perfectly aware that there would be a level-mixing going on — although the whole point would be not to need two different proof assistants but to be able to use the same one for the metalanguage and the object language, i.e. for the autophagy to really be eating itself. (Yada yada Godel universes blah.) And yes, I agree that constructiveness in the sense of avoiding LEM is irrelevant; I said that just to emphasize that I’m talking about a function whose actual output I care about rather than a “mere proof”.

So if the “execution trace” representation of a derivation tree can be trivially tweaked to spit out an actual derivation that can be plugged into an interpretation function, then I’m happy (at the moment). I reserve the right to change my mind again later, but right now I think that this is all I was after. Decidability of terms was a red herring after all, because we need (and can get) something even better.

I do wish that some other type theorists had turned up to defend decidability. I would really like to see an actual two-sided debate on the subject.

Posted by: Mike Shulman on August 12, 2016 6:54 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I do wish that some other type theorists had turned up to defend decidability. I would really like to see an actual two-sided debate on the subject.

Maybe if you write another post stating yourself that decidability is “a red herring after all”, it will call some defenders over.

I’m curious why ITT people like ITT. Maybe decidability is just a sanity check, and what they really value is something else about ITT.

Posted by: Matt Oliveri on August 12, 2016 9:22 AM | Permalink | Reply to this

Re: What is a Formal Proof?

The reason I like ITT is because it permits HoTT.

Posted by: Mike Shulman on August 12, 2016 9:23 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Does HTS not also permit HoTT? That’s definitely ETT family.

Posted by: Matt Oliveri on August 12, 2016 7:10 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Oh, did you mean to ask why people don’t like equality types that have reflection rules?

HTS sort of permits HoTT. But it at least appears to tie the semantics more closely to model-categorical presentations in which there is a “point-set level equality” present with which to interpret the strict equality. We don’t yet know how to interpret ITT without a model-categorical presentation either, but it seems more likely to me that this will be possible with ITT because its judgmental equality is more controlled than HTS’s strict equality.

Posted by: Mike Shulman on August 12, 2016 7:18 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Oh, did you mean to ask why people don’t like equality types that have reflection rules?

No.

HTS sort of permits HoTT. But it at least appears to tie the semantics more closely to model-categorical presentations in which there is a “point-set level equality” present with which to interpret the strict equality.

I’m pretty sure that’s not a consequence of ITT vs ETT. Thorsten Altenkirch, Paolo Capriotti, and Nicolai Kraus have a HoTT with Strict Equality based on ITT + K that I think has the same issue.

…it seems more likely to me that this will be possible with ITT because its judgmental equality is more controlled than HTS’s strict equality.

I don’t know. There must be a version of HoTT that we could both approve of. My problem with ITT is that it gunks up the terms with hints needed for proof checking, but that correspond to nothing in the semantics. Find a semantics that takes advantage of Book HoTT’s abstraction, then design good, gunkless meaning terms for it, and you will have the right system.

Posted by: Matt Oliveri on August 12, 2016 8:32 PM | Permalink | Reply to this

Re: What is a Formal Proof?

If I say “I don’t like ETT because X”, and you say “ETT isn’t the only thing that has X”, then of course I reply “well, then, that means I don’t like those other things either”. (-:

(I’m not dogmatic – these theories have their uses, and I’ve used them myself. I’m just saying I’d rather not have to use them; reified strict equality feels like a kludge that a more elegant theory can do without.)

The question of “gunk” in terms seems to me to be just an issue of finding good implicit-arguments and inference/elaboration tools at user-level, not anything about the design of the underlying type theory (i.e. the derivation trees).

However, this discussion has diverged rather from the main topic of this post, so if you want to continue it, let’s take it to email.

Posted by: Mike Shulman on August 12, 2016 10:01 PM | Permalink | Reply to this

Re: What is a Formal Proof?

This sort of functionality is not present in the current proof assistants, and it would require a lot of technical effort to modify them.

Even after the technical modification, the tool will not actually produce an object x which is the formal derivation and feed it into your translation function f to obtain the translated y. At least I wouldn’t do it this way. Instead, I would provide hooks that would incorporate your f into the process which corresponds to x, but exists in time, to produce instead a modified process which corresponds to y. So, as a result, you would not get y itself (believe me, you do not want it), but you would get the conclusion of y that you could trust based on a meta-theorem.
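
Schematically, the difference is something like the following toy Haskell sketch; it has nothing to do with Andromeda’s actual code, and the rule names and the hook record are made up. The same “process” can be run either with hooks that rebuild an explicit derivation tree or with hooks that carry out a translation, so the translated conclusion is produced without the tree ever being materialized.

    {-# LANGUAGE RankNTypes #-}

    -- One hook per inference rule of a tiny conjunction-only fragment.
    data Hooks d = Hooks
      { hyp   :: String -> d        -- appeal to a named hypothesis
      , andI  :: d -> d -> d        -- and-introduction
      , andE1 :: d -> d             -- and-elimination (left)
      }

    -- A "process corresponding to a derivation": it never stores the tree,
    -- it only knows how to drive whatever hooks it is given.
    type Process = forall d. Hooks d -> d

    -- Example process: deriving A from the hypothesis A /\ B.
    example :: Process
    example hooks = andE1 hooks (hyp hooks "A /\\ B")

    -- Hook set 1: reconstruct an explicit (printable) derivation tree.
    treeHooks :: Hooks String
    treeHooks = Hooks
      { hyp   = \h -> "hyp(" ++ h ++ ")"
      , andI  = \p q -> "andI(" ++ p ++ ", " ++ q ++ ")"
      , andE1 = \p -> "andE1(" ++ p ++ ")"
      }

    -- Hook set 2: a stand-in for a translation, here just counting rules.
    sizeHooks :: Hooks Int
    sizeHooks = Hooks { hyp = const 1, andI = \p q -> p + q + 1, andE1 = (+ 1) }

    main :: IO ()
    main = do
      putStrLn (example treeHooks)   -- andE1(hyp(A /\ B))
      print (example sizeHooks)      -- 2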

It’s the sort of thing that is in the back of my mind, and one reason why I want an extremely flexible nucleus. By the way, in most cases you’d be interested in translating a fragment of a theory, not the entire dependent type theory.

Posted by: Andrej Bauer on August 12, 2016 8:51 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I’m okay with “lots of technical effort”; I just want the architecture to make it possible.

Posted by: Mike Shulman on August 12, 2016 9:25 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I don’t have a lot of time for this now, but I like the discussion. Would you be willing to state which other meta-properties are crucial? (I do not recall which properties NuPrl or Andromeda have, for instance).

Posted by: Bas Spitters on August 12, 2016 9:34 AM | Permalink | Reply to this

Re: What is a Formal Proof?

If that was addressed to me (with “crucial” meaning “I want them”?), then I think I would need you to give me a list of meta-properties to pick from. (-:

Posted by: Mike Shulman on August 12, 2016 9:37 AM | Permalink | Reply to this

Re: What is a Formal Proof?

We clearly want consistency; canonicity seems important too; Coq is not strongly normalizing, …

(I am on holiday, so I should really not be discussing this now. However, this discussion is going on at the moment.)

Posted by: Bas Spitters on August 13, 2016 7:34 AM | Permalink | Reply to this

Re: What is a Formal Proof?

Consistency is, of course, guaranteed by the soundness theorem. But if we’re smashing the idol of decidability, we shouldn’t take anything else for granted. What are the real reasons to want canonicity?

Posted by: Mike Shulman on August 15, 2016 5:11 AM | Permalink | Reply to this

Re: What is a Formal Proof?

I don’t know for sure about Andromeda, but all the other “constructive” type theories seem to have canonicity. For NuPrl it is an important part of the meaning explanations and the computational interpretation. See e.g. Dan Licata’s post and the recent papers on Computational Higher Type Theory.

So the discussion seems different from the one about decidability, where there are reasonable constructive type theories without this property.

Posted by: Bas Spitters on August 30, 2016 2:25 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I’m not sure what sense can be given to the phrase “all constructive type theories have canonicity” other than by essentially taking it as the definition of the word “constructive”. Book HoTT is “constructive” in another sense, but it doesn’t have canonicity.

Posted by: Mike Shulman on August 30, 2016 6:39 PM | Permalink | Reply to this

Re: What is a Formal Proof?

I was thinking about type theories without HITs and univalence. I guess that is why you say Book HoTT does not have canonicity?

Posted by: Bas Spitters on August 30, 2016 9:38 PM | Permalink | Reply to this

Re: What is a Formal Proof?

Yes, in that case. But more generally, I think a theory can be “constructive” in some senses of the word without having canonicity.

Posted by: Mike Shulman on August 30, 2016 9:41 PM | Permalink | Reply to this

Re: What is a Formal Proof?

This new paper on a strongly normalizing computational rule for univalence in higher-order minimal logic sums up what is needed for a meaning explanation:

  • confluence
  • normalization
  • every closed normal form of a type is a canonical object

Posted by: Bas Spitters on October 5, 2016 9:18 PM | Permalink | Reply to this

Re: What is a Formal Proof?

This new paper on a strongly …

I really wish people wouldn’t provide blind links directly to .PDF files, especially when the file has a web page home.

Robin Adams, Marc Bezem, Thierry Coquand: A Strongly Normalizing Computation Rule for Univalence in Higher-Order Minimal Logic, arXiv:1610.00026

See? I can tell that Coquand is an author without having to download a PDF.

Posted by: RodMcGuire on October 6, 2016 3:21 PM | Permalink | Reply to this
