

January 4, 2011

Entailment and Implication

Posted by Simon Willerton

I have never quite got my head around the relationship between entailment and implication. Of course, having written this short post I now have a better understanding of my misunderstandings, at least. I was wondering if anyone could shed any more light on my thoughts.

To take a concrete example of the kind of thing I have in mind, in Lawvere’s paper Metric spaces, generalized logic and closed categories he defines the category $\mathbf{2}$ of truth values $\mathrm{true}$ and $\mathrm{false}$ where there is a morphism $a\to b$ if and only if $a$ ‘entails’ $b$, written $a\vdash b$. We ‘obviously’ have

$$\mathrm{false}\vdash \mathrm{false};\quad \mathrm{false}\vdash \mathrm{true};\quad \mathrm{true}\vdash \mathrm{true}.$$

What does it really mean that, for instance, false entails true?

I guess I understand entailment to mean that if you are working in some logical system with rules of deduction then “$a$ entails $b$” corresponds to there being a sequence of deductions which starts with $a$ and ends with $b$. Are the above three entailments then just axiomatic in “classical logic”?

With the above definition of entailment using deductions, it follows that entailment is transitive and reflexive and so $\mathbf{2}$ is a category.

Lawvere observes that the logical operation and, denoted $\wedge$, (also called conjunction by logicians who wish to confuse me) makes $\mathbf{2}$ into a monoidal category, and that with this monoidal structure it also has an internal hom structure which is just implication, denoted $\implies$. The condition satisfied by an internal hom becomes “$(a\wedge b)\vdash c$ if and only if $a \vdash (b\implies c)$”.

The actual values of the internal hom, as we all know, are as follows.

$$(\mathrm{false}\implies \mathrm{false})=\mathrm{true};\quad (\mathrm{false}\implies \mathrm{true})=\mathrm{true};\quad (\mathrm{true}\implies \mathrm{false})=\mathrm{false};\quad (\mathrm{true}\implies \mathrm{true})=\mathrm{true}.$$

So it appears that $a\implies b$ is precisely the truth value of $a\vdash b$. Is that an appropriate way in which to interpret implication? Are there logical systems in which that does not hold or does not make sense?
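Since $\mathbf{2}$ is finite, all of the above can be checked by brute force. Here is a minimal sketch in Haskell (an illustration only; `entails` and `implies` are made-up names), modelling $\mathbf{2}$ as `Bool` with $\mathrm{false} \le \mathrm{true}$:

```haskell
-- Lawvere's 2 modelled as Bool: a morphism a -> b exists iff a <= b.

entails :: Bool -> Bool -> Bool   -- the truth value of  a |- b
entails a b = a <= b              -- False <= True in the derived order

implies :: Bool -> Bool -> Bool   -- the internal hom  a => b
implies a b = not a || b

-- The internal-hom condition:  (a /\ b) |- c  iff  a |- (b => c).
adjunction :: Bool
adjunction = and [ ((a && b) `entails` c) == (a `entails` (b `implies` c))
                 | a <- [False, True], b <- [False, True], c <- [False, True] ]

-- The observation above:  a => b  has the same table as  a |- b.
sameTable :: Bool
sameTable = and [ (a `implies` b) == (a `entails` b)
                | a <- [False, True], b <- [False, True] ]
```

Both `adjunction` and `sameTable` evaluate to `True`: in this tiny model the implication table really is, truth value by truth value, the entailment relation.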

Posted at January 4, 2011 5:00 PM UTC


38 Comments & 0 Trackbacks

Re: Entailment and Implication

and, denoted $\wedge$, (also called disjunction by logicians who wish to confuse me)

That’s not only confusing but malicious. And, denoted $\wedge$, is conjunction.

Hope that clears up some of your disfusion.

Posted by: Tom Leinster on January 4, 2011 6:38 PM | Permalink | Reply to this

Re: Entailment and Implication

I think that’s about how I understand the difference. Implication is an “operation on truth-values/propositions/statements” which produces, given two such things $P$ and $Q$, another such thing $P\Rightarrow Q$ of the same type. By contrast, entailment is a statement about two such things, not itself a thing of the same sort. One might say that entailment belongs to the “metalogic” in which one describes the logic.

There do exist logical systems which do not include implication. One such system, which is of great importance in topos theory, is geometric logic. However, systems of this sort usually do include entailment. More simply, any poset which is not a Heyting algebra could be regarded as having a notion of “entailment” (namely $\le$) but no notion of implication.

Elementary logic books tend to obscure the distinction, since when you have implication, then you can reduce entailment to a notion of “truth”, since $P\vdash Q$ precisely when $\vdash (P\Rightarrow Q)$, i.e. when $P\Rightarrow Q$ is true/valid/provable.

Posted by: Mike Shulman on January 4, 2011 8:23 PM | Permalink | Reply to this

Re: Entailment and Implication

The last result Mike mentions is generally called the deduction theorem (or a special case of it). It holds for just about all the first-order deductive systems you’re likely to meet, but as it says at that page,

there are first-order systems in which new inference rules are added for which the deduction theorem fails.

Posted by: David Corfield on January 5, 2011 9:29 AM | Permalink | Reply to this

Re: Entailment and Implication

It fails for some modal logics like provability logic, which is the logic you get by introducing a modal operator □ that means “it can be proved that”. (In the sense that Gödel’s incompleteness theorem is about statements that assert their own unprovability.) It got me pretty confused for a while because one of the deduction rules (shared by many modal logics) is that from A you can deduce □A. But A→□A is not a theorem. Intuitively that makes sense. If you’ve managed to prove A then clearly A is provable. But you can’t in general expect to prove A→□A.

Posted by: Dan Piponi on January 13, 2011 12:32 AM | Permalink | Reply to this

Re: Entailment and Implication

Is this a failure of the deduction theorem, though? It’s not surprising that $A \to \square A$ isn’t a theorem, but I would expect that $A \vdash \square A$ is not derivable in general, either.

Rather, the introduction rule for necessity looks like: $$\frac{\cdot \vdash A}{\Gamma \vdash \square A}$$

(Or have a two-part context, with validity assumptions as well as truth assumptions.) So we can only derive $A \vdash \square A$ when $A$ itself is a theorem (or $A$ is assumed valid), and thus that is the only situation in which $A \to \square A$ is a theorem.

Or is there something extra weird going on in provability logic? Or, since the above is a natural deduction presentation, is there something amiss about Hilbert systems for provability logic?

Posted by: Dan Doel on January 13, 2011 10:35 PM | Permalink | Reply to this

Re: Entailment and Implication

Like many mathematicians not trained as logicians or computer scientists I was brought up in the language of Hilbert systems and am not very good at holding conversations in the language of natural deduction. So in the language of Hilbert systems, and going back to basics to ensure I’m not misunderstanding your language:

According to my first textbook on logic, A⊢B (syntactic entailment) means that starting from A I can deduce B. The “generalization rule” of provability logic (and many modal logics) says that if A is a theorem, so is □A. So from A I can deduce □A (simply by applying this derivation rule). So I think it’s correct to say A⊢□A even though A→□A is not a theorem.

Meanwhile, my textbook also says the Deduction Theorem is this: S⊢(s→t) iff S∪{s}⊢t. But this doesn’t hold for s=A and t=□A in provability logic.

So I think I’m correct to say that this is a ‘failure’ of the deduction theorem. (Not a surprise really, because the proof of the deduction theorem explicitly makes use of the structure of propositions in propositional calculus, which doesn’t generalise to provability logic.) In addition, I’ve seen some discussion of this exact subject on the web, and I think in a paper, but as usual you can never find these things again…

Posted by: Dan Piponi on January 14, 2011 12:38 AM | Permalink | Reply to this

Re: Entailment and Implication

In the case of that generalization rule, it comes down to, I think, “what is a theorem?” Usually I think the definition is, “a formula deducible from the empty context/no hypotheses/something of that sort.” So the fact that you can hypothesize $A$ to prove $A$ does not make $A$ a theorem. And this is also what is being said in the upper part, “$\cdot \vdash A$”, of the rule I gave. If you can derive $A$ from no premises (it’s a theorem), then you get $\square A$ with any set of hypotheses you want.

So, a little closer to what you wrote, “$A$ is a theorem” means “$\{\} \vdash A$”, and so, if $\{\} \vdash A$ then $\{\} \vdash \square A$. We could weaken that to, if $\{\} \vdash A$ then $S \vdash \square A$, but we cannot (always) get $S \vdash A$ iff $S \vdash \square A$ (although the direction, if $S \vdash \square A$ then $S \vdash A$, should work).

Anyhow, this would again mean that $A \vdash \square A$ only if $\{\} \vdash A$, which is not true for all $A$.
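One way to make this concrete is a toy proof-tree checker in which necessitation only fires on subproofs that use no hypotheses. This is just a sketch (the `Prop` and `Proof` types are made up for illustration, and axiom instances are not checked):

```haskell
import Data.Set (Set)
import qualified Data.Set as Set

-- A toy modal language.
data Prop = Var String | Prop :-> Prop | Box Prop
  deriving (Eq, Ord, Show)

-- Proof trees: hypotheses, axiom instances, modus ponens, necessitation.
data Proof
  = Hyp Prop         -- use a hypothesis
  | Axiom Prop       -- appeal to some axiom scheme (not checked here)
  | MP Proof Proof   -- from  A :-> B  and  A,  conclude  B
  | Nec Proof        -- from a *closed* proof of A, conclude  Box A

-- The hypotheses a proof actually uses.
hyps :: Proof -> Set Prop
hyps (Hyp a)   = Set.singleton a
hyps (Axiom _) = Set.empty
hyps (MP f g)  = hyps f `Set.union` hyps g
hyps (Nec p)   = hyps p

-- The conclusion of a proof, or Nothing if a rule is misapplied.
conclusion :: Proof -> Maybe Prop
conclusion (Hyp a)   = Just a
conclusion (Axiom a) = Just a
conclusion (MP f g)  = do
  (a :-> b) <- conclusion f
  a'        <- conclusion g
  if a == a' then Just b else Nothing
conclusion (Nec p)
  | Set.null (hyps p) = Box <$> conclusion p  -- only theorems may be boxed
  | otherwise         = Nothing               -- blocks  A |- Box A  in general
```

With this checker, `conclusion (Nec (Axiom a))` is `Just (Box a)`, while `conclusion (Nec (Hyp a))` is `Nothing`: if $\{\} \vdash A$ then $\{\} \vdash \square A$, but not $A \vdash \square A$.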

Posted by: Dan Doel on January 14, 2011 2:59 AM | Permalink | Reply to this

Re: Entailment and Implication

We must be talking at cross-purposes here as this stuff is very elementary and there isn’t really room for disagreement. :-)

The deduction theorem says that for any S, S⊢(s→t) iff S∪{s}⊢t. To find a counterexample I just need to pick an S for which this fails. I pick S={}, as well as s=A and t=□A.

(I don’t understand when you say “we cannot (always) get S⊢A iff S⊢□A”. I don’t see the relevance.)

But this might be relevant: I just remembered a point from something I saw in one of the articles on provability logic and the deduction theorem that I can’t find. People who use Hilbert systems say that it fails and people who use natural deduction say that it doesn’t. I think the thrust of the paper was that the Hilbert system people were doing it all wrong. Aha, found the abstract on this page.

Posted by: Dan Piponi on January 14, 2011 2:53 PM | Permalink | Reply to this

Re: Entailment and Implication

The deduction theorem says that for any S, S⊢(s→t) iff S∪{s}⊢t. To find a counterexample I just need to pick an S for which this fails. I pick S={}, as well as s=A and t=□A.

I don’t believe this is a counterexample, though. I believe that whenever $A \to \square A$ is not a theorem, you will be unable to derive $A \vdash \square A$.

(I don’t understand when you say “we cannot (always) get S⊢A iff S⊢□A”. I don’t see the relevance.)

My point was to consider $S = \{A\}$. $A \vdash A$ always holds, but $A \vdash \square A$ only holds when $\{\} \vdash A$. To pick a particular example, $\bot \vdash \bot$, but not $\{\} \vdash \bot$, so not $\bot \vdash \square \bot$. Unless provability logic is different from other modal logics in this regard (it may be).

But this might be relevant: …

Quite possibly. I’ve never looked at Hilbert systems for modal logic, and it seems like the rules for modality would be weird for them. It’s certainly not an additional axiom. I guess it’s closer in status to modus ponens, but while that is “if $S \vdash A \to B$ and $S \vdash A$ then $S \vdash B$”, the modal introduction is more like “if $\{\} \vdash A$ then $S \vdash \square A$”. So it’s not even, “if I can derive these two things from the same premises, I can derive this third thing from those premises.” It’s, “if I can derive this one thing from one set of premises, I can derive this other thing from a different set of premises.”

Anyhow, what Sridhar Ramesh wrote below may also be relevant. That is, just because you can reason from $S \vdash A$ to $S \vdash B$ doesn’t mean that you can reason $S \cup \{A\} \vdash B$. And in modal logic, it is this, with $S = \{\}$, $B = \square A$, that (I can agree) breaks.

Posted by: Dan Doel on January 14, 2011 4:41 PM | Permalink | Reply to this

Re: Entailment and Implication

Thanks for bearing with me. I suddenly saw what my mistake was. You really couldn’t have explained any more clearly. Sridhar perfectly describes the kind of entailment that I was erroneously imagining ⊢ to include, and gives the correct statement of what I meant when I said the deduction theorem “fails” in this context.

Posted by: Dan Piponi on January 14, 2011 9:46 PM | Permalink | Reply to this

Re: Entailment and Implication

(Note: In reading the following, you should think of a logical system as a notion of category-with-structure, and particular theories within that logical system as particular such categories-with-structure; propositions correspond to objects, morphisms to proofs of entailments/sequents, and equality of morphisms to equality (in a suitable proof-theoretic sense) of proofs. Thus, $Hom(A, B)$ corresponds, in some sense, to (the set of proofs of) $A \vdash B$. Furthermore, $A \wedge B$ (conjunction) corresponds to the categorical product $A \times B$ [as the nullary case, $\top$ (empty conjunction; i.e., truth) corresponds to the terminal object $1$], and $A \supset B$ (implication) corresponds to the categorical exponential $B^A$. The modal operator $\Box$, in turn, is a product-preserving endofunctor whose exact further details are not of terrible importance to us right now.)

There are two slightly different senses in which one might refer to the “deduction theorem”: one is the sense in which proofs of $\Gamma \vdash A \supset B$ correspond to proofs of $\Gamma \wedge A \vdash B$; this is just the defining adjunction of exponentiation in a cartesian closed category, which can be viewed as the intro/elim rules for implication in a suitably sequent-style presentation. This still holds perfectly well in provability logic. This is the sense referred to by Mike Shulman above (though not using the term “deduction theorem”).

Another sense in which one might refer to the “deduction theorem” is as a sense in which proofs of $A \vdash B$, in the background context of whatever assumed entailments, correspond to proofs of $\top \vdash B$ given the further assumption of an entailment $\top \vdash A$; that is, as a correspondence between morphisms from $A$ to $B$ in a category-with-structure $C$, on the one hand, and, on the other, global sections of $B$ in the free category-with-structure extending $C$ with a global section of $A$. For example, cartesian closed categories (modelling propositional intuitionistic logic) have this property as well; freely augmenting a cartesian closed category $C$ with a global section of some object $A$ results in the Kleisli category $C[A]$ of the functor $A \wedge -$, and there is an obvious correspondence between $C[A](\top, B)$ and $C(A, B)$.
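For readers who like to see such things in code, here is one way to render the co-Kleisli category of $A \wedge -$ in (an idealized) Haskell, together with the stated correspondence; a sketch only, with made-up names:

```haskell
-- Morphisms x ~> y in C[A]: ordinary functions out of (a, x).
type HomA a x y = (a, x) -> y

-- Identity: discard the ambient global section of a.
idA :: HomA a x x
idA (_, x) = x

-- Composition: both arrows see the same global section of a.
compA :: HomA a y z -> HomA a x y -> HomA a x z
compA g f (a, x) = g (a, f (a, x))

-- The correspondence  C[A](T, B) ~ C(A, B), with T the unit type:
toSection :: HomA a () b -> (a -> b)
toSection f a = f (a, ())

fromSection :: (a -> b) -> HomA a () b
fromSection h (a, ()) = h a
```

One can check directly that `idA` and `compA` satisfy the category laws, and that `toSection` and `fromSection` are mutually inverse.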

This second sense is the sense of “the deduction theorem” which fails in provability logic, suitably construed; provability logic does indeed allow one to conclude $\top \vdash \Box A$ from $\top \vdash A$, as noted, but does not have $A \vdash \Box A$ as a general theorem. (A more categorical way to put this failure is that the $\Box$ functor is not an enriched functor [for the cartesian closed structure]; we do not have $A \supset B \vdash \Box A \supset \Box B$. The failure of this functor to be enriched with respect to internal implication is characteristic of “modality”.)

That is, the distinction between the two senses of “the deduction theorem” is once again one between different levels of entailment; the turnstile of sequents vs. the horizontal line of rules of inference between sequents (an even more “external” level of entailment than the turnstile), which is to say, looking at the hom-sets of a category vs. looking at the result of freely adding a global section to a category.

Posted by: Sridhar Ramesh on January 14, 2011 1:09 PM | Permalink | Reply to this

Re: Entailment and Implication

(Of course, it should be stressed, the notion of “freely adding” a morphism of some sort to a category-with-structure is, naturally, dependent on what exactly the extra “structure” is taken to be; freely adding a morphism to a category-with-this-structure may be different from freely adding the same sort of morphism to a category-with-that-structure, even if the two have the same underlying categories)

Posted by: Sridhar Ramesh on January 14, 2011 4:44 PM | Permalink | Reply to this

Re: Entailment and Implication

Well, we tend to think of entailment as being “really true”, an external conclusion, while implication tends to be construed as an internal thing… you can think of it, if you like, as having two systems of logic which are supposed to mirror each other.

I wonder if I can try to make things better by temporarily making this worse?

A sufficient condition for a logic being interesting to me is that it admits a theory of categories of models; the case most familiar to me is first-order logic interpreted within set theory via the Tarski “definition of truth,” which is a fairly mechanical thing. One reason this makes things interesting is that a priori you can define two sorts of entailment: one syntactic, $\vdash$, and the other semantic, which is usually denoted by $\models$.

Semantic entailment is defined relative to any universe, language, and meaning of truth by the condition that $\Theta \models \Phi$ precisely when all models verifying $\Theta$ also verify $\Phi$, where $\Phi$ is a proposition and $\Theta$ a set of propositions. This is potentially a very large relation. If I recall correctly, Gödel proved that under some conditions the relation $\models$ is reasonably reducible to the finitary, first-order $\vdash$; this is the content of the completeness and compactness theorems. Also, by definition, there are no models which verify false, so every such model also verifies true, so false semantically entails true. Though maybe that doesn’t help much?
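For propositional logic, where a model is just a truth assignment, this definition of $\models$ is directly executable by enumerating valuations. A small sketch (all names made up for illustration):

```haskell
import Data.List (nub)

data Prop = T | F | Var String | Not Prop
          | And Prop Prop | Or Prop Prop | Imp Prop Prop
  deriving (Eq, Show)

-- The variables occurring in a proposition.
vars :: Prop -> [String]
vars (Var v)   = [v]
vars (Not p)   = vars p
vars (And p q) = nub (vars p ++ vars q)
vars (Or p q)  = nub (vars p ++ vars q)
vars (Imp p q) = nub (vars p ++ vars q)
vars _         = []

-- Evaluate a proposition in a model (a truth assignment).
eval :: [(String, Bool)] -> Prop -> Bool
eval _ T         = True
eval _ F         = False
eval m (Var v)   = maybe False id (lookup v m)
eval m (Not p)   = not (eval m p)
eval m (And p q) = eval m p && eval m q
eval m (Or p q)  = eval m p || eval m q
eval m (Imp p q) = not (eval m p) || eval m q

-- theta |= phi : every model verifying all of theta also verifies phi.
models :: [Prop] -> Prop -> Bool
models theta phi = all holds valuations
  where
    vs         = nub (concatMap vars (phi : theta))
    valuations = mapM (\v -> [(v, False), (v, True)]) vs  -- all assignments
    holds m    = not (all (eval m) theta) || eval m phi
```

In particular `models [F] phi` is `True` for every `phi`: no valuation verifies `F`, so false semantically entails everything, vacuously.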

There are theoretically useful logics which don’t enjoy compactness — a trivial example, trying to get at what is really true about the natural numbers, is a logic in the language of Peano arithmetic having a deduction schema $$\{ \Phi[n] \mid n = 0, S0, SS0, \dots \} \vdash \forall x\colon \Phi[x]$$ in which you can have proofs that are infinitely long; e.g., this system formalizes small instances of Goodstein’s theorem without quite mentioning $\epsilon_0$. And in this system there are theorems which aren’t reducible to tautologies. But this logic is still of limited practical use. For example, it also decides the twin primes conjecture, but knowing that doesn’t help us discern the answer.

Posted by: some guy on the street on January 4, 2011 8:43 PM | Permalink | Reply to this

Re: Entailment and Implication

Entailment and implication are in the same relationship as the Hom-functor and exponentials in a category.

Posted by: Andrej Bauer on January 4, 2011 9:01 PM | Permalink | Reply to this

Re: Entailment and Implication

Entailment and implication are in the same relationship as the Hom-functor and exponentials in a category.

Well, yes. That’s what I was saying, but what I perhaps failed to say is that I don’t know how to think of this. Maybe I should just take this as the definition and not worry about how to think of it. But this is not how most mathematicians think of implication. “Implication is an internal hom” would not strike a chord with most mathematicians.

Posted by: Simon Willerton on January 5, 2011 2:46 PM | Permalink | Reply to this

Re: Entailment and Implication

Mike said, in part, that implication is something inside the logic and entailment is something outside. Personally, I don’t tend to find phrases such as ‘this is a statement in the logic, but that’s a statement in the metalogic’, or ‘this is inside the formal system, but that’s outside’, very compelling. Mathematics is in some sense one big formal system; every mathematical statement we make has to be logical. (You can’t say ‘the Riemann Hypothesis is true because your mother says so’; or you can, but it won’t count as mathematics.) I think that’s why I don’t find that narrative very helpful.

But as Andrej’s comment hints, categorical logic can make the logic/metalogic distinction perfectly precise. In particular, it can help to clarify the distinction between implication and entailment. I’ll have a go at explaining.

Let’s start with posets. Take a poset $P$ that, viewed as a category, is cartesian closed. Binary products give us a greatest lower bound $A \wedge B$ for each $A, B \in P$. The terminal object is a greatest element $1 \in P$. The exponential $B^A$ is usually written $(A \to B)$, and its universal property is that $A \wedge B \leq C$ if and only if $A \leq (B \to C)$.

Lemma  Let $A, B \in P$. Then $A \leq B$ if and only if $1 \leq (A \to B)$, if and only if $(A \to B) = 1$.

Proof  By the universal property of $\to$ and the fact that $1$ is greatest, $A \leq B$ iff $1 \wedge A \leq B$ iff $1 \leq (A \to B)$ iff $(A \to B) = 1$.

Now let $X$ be a set and $P$ the power set of $X$, ordered by inclusion. We have $A \wedge B = A \cap B$, $1 = X$, and $(A \to B) = X \setminus (A \setminus B) = (X \setminus A) \cup B$.
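This powerset case is concrete enough to experiment with directly. A small sketch using Haskell’s Data.Set (`imp` and `lemmaHolds` are made-up names, and $A$, $B$ are assumed to be subsets of $X$):

```haskell
import Data.Set (Set)
import qualified Data.Set as Set

-- Heyting implication in the power set of x:  (a -> b) = (x \ a) union b
imp :: Ord t => Set t -> Set t -> Set t -> Set t
imp x a b = (x `Set.difference` a) `Set.union` b

-- The Lemma:  a <= b  iff  (a -> b) = x
lemmaHolds :: Ord t => Set t -> Set t -> Set t -> Bool
lemmaHolds x a b = (a `Set.isSubsetOf` b) == (imp x a b == x)
```

For instance, with `x = Set.fromList [1..5]`, `a = Set.fromList [1,2]` and `b = Set.fromList [1,2,3]`, the set `imp x a b` is all of `x`, matching $A \subseteq B$.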

Suppose that you and I are doing some work together. We have a set $X$, and two particularly interesting subsets $A$ and $B$. We finish our discussion. Some hours later, I phone you and say:

Dude, $A \subseteq B$!

Or I might equivalently say:

Dude, every element of $A$ is an element of $B$!

(I’d be particularly likely to put it this way if we had adjectives for membership of $A$ and $B$: ‘dude, every ample element is Borromean!’) Or, I might equivalently say:

Dude, $x \in A \vdash x \in B$!

OK, I wouldn’t actually say this, but it does mean the same thing. (Were I more conscientious, and not speaking down the phone, I would decorate that statement with some further notation to indicate that the ‘$x$’ mentioned is supposed to range over $X$.)

Those are three equivalent ways of saying the same thing. Here are a further three equivalent ways:

Dude, $(A \to B) = X$!

Dude, every element of $X$ is in $(A \to B)$!

Dude, $\vdash x \in (A \to B)$!

The absence of stuff to the left of the ‘$\vdash$’ means that the right-hand side is true under no assumptions at all on $x$. In this example it seems forced to mention $(A \to B)$ at all, but I guess there are circumstances where it’s natural.

The first three are equivalent to each other because they’re just different idioms for saying the same thing. The same goes for the second three. But the first three are equivalent to the second three exactly because of the lemma.

Now let’s try something more ambitious. Let $\mathcal{E}$ be a category (not necessarily $Set$) and $X \in \mathcal{E}$. There is a category $Mono(X)$ whose objects are monos $A \rightarrowtail X$ and whose maps are commutative triangles. It is in fact a preorder, i.e. each hom-set has at most one element. A subobject of $X$ is an object of $Mono(X)$ (or strictly speaking, an isomorphism class of objects), and we therefore have an order $\leq$ on subobjects of $X$. When $\mathcal{E} = Set$, this is the power set of $X$ ordered by inclusion.

Supposing that $\mathcal{E}$ is a topos (though that’s probably overkill), the poset $Mono(X)$ is cartesian closed. The greatest lower bound $A \wedge B$ of two subobjects $A \rightarrowtail X$, $B \rightarrowtail X$ is their pullback. The greatest element is $X \rightarrowtail X$. Finally, $(A \to B)$ is defined using exponentials in $\mathcal{E}$.

Now suppose that $X$ is an internal group. Consider the statement ‘if $x^6 = x^{15}$ then $x^2 = 1$’. (This is true in some groups in $Set$, and false in others; it says that there are no elements of order 3.) There is a subobject $A$ of $X$ that we might think of as $\{x \in X: x^6 = x^{15}\}$ — and if $\mathcal{E} = Set$, it’s exactly that. It’s defined as the equalizer of $X \rightrightarrows X$ where the two maps are ‘6th power’ and ‘15th power’ respectively. There is similarly a subobject $B$ of $X$ that we might think of as $\{x \in X: x^2 = 1\}$. Then there’s a further subobject $(A \to B)$.

By the lemma,

$A \leq B$ if and only if $X \leq (A \to B)$ if and only if $(A \to B) = X$.

Suppose I’ve discovered that XX does have these equivalent properties. I might communicate this to you by saying:

Dude, $A \leq B$!

A logician might communicate the same thing by saying:

Dude, $x \in A \vdash x \in B$!

Both those statements are about entailment. But using the result of the lemma, it would be equivalent to say:

Dude, $(A \to B) = X$!

A logician might communicate this by saying:

Dude, $\vdash x \in (A \to B)$!

This statement contains both entailment and implication. The entailment is the $\vdash$, and the implication is the $\to$. It’s not going to be possible to say something with implication alone, because $(A \to B)$ is not a mathematical statement: it’s an object of $\mathcal{E}$. You can no more say, ‘Dude, $(A \to B)$!’ than ‘Dude, $\mathbb{R}$!’

The moral of all this: a statement in the metalogic is something that can be prefaced with the word ‘Dude’.

Posted by: Tom Leinster on January 5, 2011 8:55 AM | Permalink | Reply to this

Re: Entailment and Implication

Dude, whereas you equivalently might, or might not, say,

Dude, $x\in A \vdash x\in B$!

it is unlikely that many mathematicians would. They are more likely to say one of the following.

  • Dude, if $x\in A$ then $x\in B$!
  • Dude, $x\in A$ implies $x\in B$!
  • Dude, if $x\in A$ then I can prove $x\in B$!

The point being here that the mathematician would speak in some sort of natural language. So what is the difference between the above statements and the statements you made?

Posted by: Simon Willerton on January 5, 2011 3:07 PM | Permalink | Reply to this

Re: Entailment and Implication

Right: I said explicitly “I wouldn’t actually say this, but it does mean the same thing”. Of course hardly any mathematicians speak that way.

There is no difference between “every element of $A$ is an element of $B$” (one of my first three statements) and your statements “if $x \in A$ then $x \in B$” and “$x \in A$ implies $x \in B$”. It’s just a matter of idiom. “If $x \in A$ then I can prove $x \in B$” is an assertion about your personal abilities, so that’s different.

I’m puzzled that you say “The point being here that the mathematician would speak in some sort of natural language”, because I thought I’d just made that point, very visibly and in some detail, in the comment you’re replying to.

Posted by: Tom Leinster on January 6, 2011 12:11 AM | Permalink | Reply to this

Re: Entailment and Implication

All this “dude” talk inevitably reminds me of this bit from Rob Schneider (of Saturday Night Live fame).

I’m not sure where is best to comment on “false entails true”, so I’ll say it here. You can think of truth values as corresponding to subsets of $1$. Then “false entails true” has exactly the same content as

$$\emptyset \subseteq 1$$

which is probably easier to grasp because it’s more “visual”.

With regard to the structure of logic, one could think of a truth value as a predicate of “empty type” (the empty type being $1$). A general predicate gets interpreted as a subset of some type $X$ (where the variables are interpreted), so the picture is that there is some base category of types, and a fiber above each type $X$ consisting of predicates or relations of that type. On each individual fiber there are propositional operators (conjunction, disjunction, etc.), and the logical quantifiers let you pass between fibers over different types. A fully quantified predicate (e.g., a formula where all variables are bound) lands one in the fiber over $1$, in other words gives a truth value.

I find this picture of first-order logic helpful: given a first-order theory, there is a base cartesian category of types and operations between types; this is where terms of a theory live. Then, there is a total category consisting of formulas or predicates of the theory, living in fibers over each type. The first-order theory is then given as an appropriate Boolean-algebra or Heyting-algebra hyperdoctrine structure on the fibration.

Posted by: Todd Trimble on January 5, 2011 8:24 PM | Permalink | Reply to this

Re: Entailment and Implication

What does it really mean that, for instance, false entails true?

Syntactically, regarding entailments from $\mathrm{false}$, you might want to look up ex falso quodlibet, which Wikipedia calls the Principle of explosion. If you take $\mathrm{false}$ to be equivalent to $\psi \wedge \neg\psi$ for some proposition $\psi$, it’s not hard to see how any proposition can be derived from it.
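Under the Curry–Howard reading this principle even appears in Haskell’s standard library: the empty type `Void` plays the role of false, and `absurd :: Void -> a` is precisely ex falso quodlibet. A short illustration:

```haskell
import Data.Void (Void, absurd)

-- Ex falso quodlibet: from a proof of falsity, derive any proposition.
-- Void has no values, so this function can never actually be applied.
exFalso :: Void -> a
exFalso = absurd

-- From  psi /\ not psi,  i.e. a pair (p, p -> Void), derive anything.
explosion :: (p, p -> Void) -> a
explosion (p, notP) = absurd (notP p)
```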

Relevance logicians don’t buy into this, however, since they believe that ψ\psi would have to be relevant to the derived proposition.

Semantically, it says that in any model in which false holds, true holds there too. But false never holds in a model, so this is vacuously the case.

Posted by: David Corfield on January 5, 2011 9:44 AM | Permalink | Reply to this

Re: Entailment and Implication

I was very skeptical of relevance logic at first, but lately I’ve come around. The reason I was skeptical was that accepting the standard elimination rule for disjunction but not for falsehood seemed very weird to me. It’s like accepting binary coproducts but not an initial object. But more recently I’ve realized that there are sometimes good reasons to do so.

Concretely, consider the category of sets and injections. This category is monoidal with the set-theoretic product, which is no longer Cartesian in this category, and it also has all finite coproducts. However, it is not a monoidal closed category. Now, if we restrict to nonempty sets and injections, the category becomes monoidal closed, but while it retains non-zero coproducts it loses its initial object.

So in this little example, we have a choice between monoidal closure and having an initial object, and obviously we can’t always favor one of those two choices.

The product and closure in this setup are more like linear logic than relevance logic, of course, but I think it still demonstrates that my bias against categories with coproducts but no initial object was just a bias, and not mathematically grounded.

Posted by: Neel Krishnaswami on January 5, 2011 11:49 AM | Permalink | Reply to this

Re: Entailment and Implication

Neel wrote:

consider the category of sets and injections … it also has all finite coproducts.

Really? I don’t have a proof that it doesn’t, but the obvious thing doesn’t work. E.g. writing $n$ for an $n$-element set, the coproduct of $1$ and $1$ can’t be $2$, since there are four pairs of injections $(1 \to 2, 1 \to 2)$ but only two injections $2 \to 2$.

Posted by: Tom Leinster on January 5, 2011 12:10 PM | Permalink | Reply to this

Re: Entailment and Implication

OK, now I have a proof that the category of sets and injections doesn’t have finite coproducts. Indeed, there is no coproduct $1 + 1$.

For suppose that there were. Since there is precisely one pair of injections $(1 \to 1, 1 \to 1)$, there must be precisely one injection $1 + 1 \to 1$. Hence $1 + 1 = 1$.

Now there are precisely four pairs of injections $(1 \to 2, 1 \to 2)$, but only two injections $1 + 1 = 1 \to 2$.

But perhaps this doesn’t affect your point.

Posted by: Tom Leinster on January 5, 2011 12:16 PM | Permalink | Reply to this

Re: Entailment and Implication

Yes, you’re right, and I wasn’t thinking about this properly. 1+1 isn’t even a weak coproduct, and I think this does fatally break my point.

We have the weaker property that sets and injections is monoidal and has an initial object (the empty set), and nonempty sets and injections is monoidal closed and doesn’t have an initial object.

However, this loses the connection to relevance logic, since the point of RL is that we have a true binary coproduct but no initial object. I don’t know any simple categories like this that are more than rephrasings of this point (e.g., the category of nonempty sets and functions).

Posted by: Neel Krishnaswami on January 5, 2011 1:33 PM | Permalink | Reply to this

Re: Entailment and Implication

This, together with Tom’s comment involving the expression “$1+1=1$”, reminds me of the following story I’ve heard about Bertrand Russell, though I have no idea whether it happened or not.

Russell was claiming that from the assumption of a false statement he could prove anything was true. He was then challenged to demonstrate that he was the Pope using the fact that $1+1=1$. He rose to the challenge by asserting “I am one; the Pope is one. Therefore the Pope and I are one.”

Posted by: Simon Willerton on January 5, 2011 3:20 PM | Permalink | Reply to this

Re: Entailment and Implication

Perhaps, rather than trying to simplify things, it’d be easier to understand if you make them more complex. Consider, for example, coming at it from the direction of CCCs as lambda calculi. But what if our lambda calculus is so simple that we can’t define higher-order functions? If we can’t have HOFs (i.e., functions cannot take functions as arguments) then that means the category doesn’t have exponentials. The calculus does still have functions, but it doesn’t have any way to pass them around; the functions are, in some sense, outside of the calculus itself. They’re not first-class.

The same thing holds for logics. Just as all term calculi have functions, all logics have some notion of validity (i.e., entailment). But that doesn’t mean that all logics have a way of internalizing it. Having implication in a logic is like having lambdas in a term calculus: it gives a language-internal mechanism for abstraction. In calculi we’re abstracting over subterms, in logics we’re abstracting over whether some given proposition is valid — but the important thing is that we’re doing this within the language, not just in our own thoughts. Thus, logics with implication can pass around these abstracted terms and can ask things like whether an abstract term is valid (instead of just asking whether all concretizations of it are valid).
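A Haskell rendering of this analogy may help (illustrative only): entailment is the meta-level observation that a program of a given type exists, while implication is the function type itself, which lambdas let us build and pass around inside the language.

```haskell
-- The entailment  A, A -> B |- B  is witnessed externally by exhibiting
-- a program; the implication  A -> B  is an internal, first-class thing.
mp :: (a -> b) -> a -> b
mp f x = f x

-- Because implication is internal, we can abstract over it and hand it on:
-- currying internalizes the entailment  (A /\ B) |- C  as  A -> (B -> C).
curryProof :: ((a, b) -> c) -> a -> (b -> c)
curryProof f = \x y -> f (x, y)
```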

In a way, the difference between entailment and implication is like the difference between conditionals and hypotheticals (some languages make a better distinction between those than English does).

Posted by: wren ng thornton on January 5, 2011 2:12 PM | Permalink | Reply to this

Re: Entailment and Implication

Tom says:

Mike said, in part, that implication is something inside the logic and entailment is something outside. Personally, I don’t tend to find phrases such as ‘this is a statement in the logic, but that’s a statement in the metalogic’, or ‘this is inside the formal system, but that’s outside’, very compelling.

Simon says:

I don’t know how to think of this. Maybe I should just take this as the definition and not worry about how to think of it. But this is not how most mathematicians think of implication. “Implication is an internal hom” would not strike a chord with most mathematicians.

I think you are struggling with the question of primordial foundations: what is the absolute meta-context of logic and math, and what is internal math and internal reasoning?

We had a discussion about this on the nForum recently, and it seems that Todd and Mike gave us the complete answer. Todd has been promising to write it all up in an nLab page since then. I don’t know how far this has progressed, but I am still looking forward to it.

If I may try to summarize the situation, unqualified as I am:

In the beginning there is FOL: first order logic. We all assume that we know what it means to reason in first order logic; we teach it to kids and – importantly – we can teach it to computers and see them implement it automatically and in reality. This is the outermost ambient meta-logic.

Once we have this, we formulate theories in terms of first order logic. For instance the theory of sets. We may (and ultimately should) do this fully mechanically, as if we were programming a computer that reads in first order logic. Todd kindly went through the very instructive trouble of spelling out what this means in full detail at fully formal ETCS. This sets up the topos of sets $SET$ in the outermost ambient meta-logic.

Once we have this, we may decide to jump into $SET$ and now reason inside this. I think it is very analogous to starting with assembly language and writing a C compiler in terms of it. Once you have done it, you may choose to write all further programs not in assembly anymore, but in C, and let the compiler turn it back into assembly code.

Once we are in the (very large) topos $SET$, we may define further frames, categories, toposes internal to it. And then reason internal to these, if we find this higher-order language useful for our applications.

This gives us a hierarchy of internalizations. At each step there is a notion of implication (internal to that hierarchy level) and entailment (internal to the previous, ambient internalization level). This hierarchy has a root: first order logic. The outermost meta-logic is that of FOL, and here we only have entailment. But nobody wants to spend his or her whole life here, just as nobody wants to write code in assembly forever. So as need be, we pass to higher-level internalizations, and each time we make such a step the internal logic of the previous step becomes the meta-logic of the next one, and the implication of the previous step becomes the entailment of the next one.

Posted by: Urs Schreiber on January 5, 2011 3:19 PM | Permalink | Reply to this

Re: Entailment and Implication

Urs said:

I think you are struggling with the question of primordial foundations: what is the absolute meta-context of logic and math, and what is internal math and internal reasoning?

Quite possibly. Mathematical logic is supposed to model the logic we use in natural language, the logic we perceive in the world around us. And of course it is difficult to talk about the mathematical model without using natural language.

The Wikipedia article on entailment gives the impression that the words “implication” and “entailment” can have rather different meanings depending on the context. Perhaps my confusion is connected to the fact that these terms have very specific meanings in categorical logic which may or may not be related to how they are used in natural language.

Posted by: Simon Willerton on January 5, 2011 4:45 PM | Permalink | Reply to this

Re: Entailment and Implication

Willerton - Incidentally, why is it that the logic of natural language which models the world around us is the way it is? It could have been any other logic, why has that particular choice been made?

Posted by: jamal on January 6, 2011 1:05 AM | Permalink | Reply to this

Re: Entailment and Implication

Incidentally, why is it that the logic of natural language which models the world around us is the way it is? It could have been any other logic, why has that particular choice been made?

I don’t really know. Natural language evolved to represent our everyday experience, I suppose. Maybe we need a philosopher to answer that kind of question. Or perhaps some kind of linguist. The logic of natural language is extremely important in computational linguistics, where you really do want to translate some text (say from a webpage) into formal logic, and so make formal deductions.

One could ask, do languages differ much in their logic? I don’t precisely know what I mean by that question, but I remember reading Blackfoot Physics by F. David Peat many years ago, and seem to recall the claim that the languages of some native American tribes were sufficiently different in structure from Indo-European languages to give their science a considerably different flavour – I can’t remember any details though.

Anyway, I’m digressing and know little about these things, despite being very interested in them.

Posted by: Simon Willerton on January 7, 2011 11:34 AM | Permalink | Reply to this

Re: Entailment and Implication

I’m not sure what is being meant here by “the logic of natural language”, neither the ‘the’ nor the ‘logic’. On the other hand, I do have a book on my shelves which has that phrase as its title. Fred Sommers’ thesis there is that your comment above

Mathematical logic is supposed to model the logic we use in natural language

is hopelessly wrong, and that a pre-Fregean treatment in terms of subjects, predicates and syllogistic reasoning is much more adequate to natural language than quantification-theoretic first-order logic. He observes in the introduction that Frege would not have disagreed, but that, unlike Sommers, he would have added

so much the worse for natural language.

Posted by: David Corfield on January 7, 2011 12:34 PM | Permalink | Reply to this

Re: Entailment and Implication

There’s another factor that probably shaped whatever logic natural language embodies, namely human working memory. Psychologists who study these kinds of things apparently conclude that many of the logical fallacies people fall into when making decisions quickly, keeping everything in memory (i.e., the way people do logic in real life), arise from the very limited working memory being exceeded, unless they really slow down and carefully compartmentalise the argument. The theory is apparently that when that happens people discard details, which often leads to an erroneous belief that affects the output.

So in trying to understand where the logic of natural languages comes from, one would expect selection pressure only from those logical problems which fit in working memory. (There’s no evolutionary pressure from problems that humans can’t accurately process anyway.) Of course one could argue that if a more involved logic were better, humans would have evolved a bigger working memory, but one might expect counterpressures from metabolic requirements and the kind of problems encountered in practice.

Posted by: dave tweed on January 7, 2011 3:21 PM | Permalink | Reply to this

Re: Entailment and Implication

Urs wrote:

We had a discussion about this on the nForum recently, and it seems that Todd and Mike gave us the complete answer. Todd has been promising to write it all up in an nLab page, since. I don’t know how far this has progressed, but I am still looking forward to it.

I haven’t forgotten my promise, honest. But the work I had done got destroyed in an nLab snafu, and I haven’t gotten around to recreating it.

This is actually one of several “foundational” things I’m currently working on for the nLab but haven’t publicized. One is the article on [[algebraic theory]] which I’m not linking to yet because it’s still very much under construction, but it’s a massive reorganization of the article which had been going under that name. A companion article to that is one on free cartesian category, which is also under construction. And all this is supposed to feed parts of the planned article on first-order logic.

I’m sorry this is taking so long.

I like how Urs has put it. A lot of people struggle with the feeling that there is some kind of circularity built into discussions of logic: to say what logic is, we seem to need some baseline mathematical structures lying around, but to deal with these structures, we need to understand logic! [It’s like what I call the paradox of the dictionary: that either we accept that the dictionary is hopelessly circular, or we accept that some words are undefined, and therefore vague and imprecise.]

I’ve found the metaphor of the computer technician setting out to teach a computer “how to reason” very helpful in breaking out of this circularity. I put it this way in response to a Math Overflow query:

Logical foundations avoids this paradox ultimately by being concrete. We may put it this way: logic at the primary level consists of instructions for dealing with formal linguistic items, but the concrete actions for executing those instructions (electrons moving through logic gates, a person unconsciously making an inference in response to a situation) are not themselves linguistic items, not of the language. They are nevertheless as precise as one could wish.

We emphasize this point because in our descriptions below, we obviously must use language to describe logic, and some of this language will look just like the formal mathematics that logic is supposed to be prior to. Nevertheless, the apparent circularity should be considered spurious: it is assumed that the programmer who reads an instruction such as “concatenate a list of lists into a master list” does not need to have mathematical language to formalize this, but will be able to translate this directly into actions performed in the real world. However, at the same time, the mathematically literate reader may like having a mathematical meta-layer in which to understand the instructions. The presence of this meta-level should not be a source of confusion, leading one to think we are pulling a circular “fast one”.

In other words, to break out of the circularity, it is enough to observe that computers can be programmed to recognize certain strings as well-formed terms or formulas (of a given axiomatic theory), and how to recognize inferences as valid. It’s not as if there needs to be some background theory, or the prior existence of a completed or actual infinity of all expressions which might come up, sitting inside the computer. The computer is programmed to handle finite parts of the theory correctly, and the same applies to human users of a theory (although we say “taught”, not “programmed”).

Back to the discussion. It’s true that the distinction between the words “entailment” and “implication” is not always carefully maintained, even by logicians, but I also agree with the stance that the distinction should be that “entailment” is a relation between formulas, whereas “implication” is an operator on formulas. This can be categorified by thinking of entailments as arrows, and “implication” as an internal-hom functor in two arguments, one covariant and the other contravariant.

Posted by: Todd Trimble on January 5, 2011 6:37 PM | Permalink | Reply to this

Re: Entailment and Implication

The computer is programmed to handle finite parts of the theory correctly…

But there’s also the thought that real computers are programmed to be productive, that is, to be able to get working on the initial segment of a potentially infinite stream of inputs, as Dan Piponi describes here. Of course, at any stage it only deals with a finite input, but that’s not in the sense of a well-formed formula of finite length, understood as recursively constructed.

Is there any reason we opt for the ‘algebraic’ rather than the ‘coalgebraic’ by taking first-order logic as primordial?

Posted by: David Corfield on January 6, 2011 3:52 PM | Permalink | Reply to this

Re: Entailment and Implication

Why, apart from historical happenstance of tradition, should we think of FOL as the outermost level, or even particularly distinguished? In the same way in which we can formalize higher-order logic by giving the topos axioms or such things, we could formalize the theory of first-order logic by giving the logos axioms or such things. Which we don’t need full first-order logic to be able to give; lex logic suffices.

And even lex logic needn’t be the outermost level of our stripping back the layers. Ultimately, the outermost level is just “Descriptions of rules in ordinary language in a way we all perfectly well understand how to carry out”, and necessarily pre-formal. This is what humans come pre-equipped with; in the computer analogy, these are just programs already in the native machine code of the computer.

Underneath this pre-formal ordinary language, we can implement all kinds of formal logical systems, but there’s no particular privileged status for FOL, so far as I can see; it’s just one out of many useful formal systems whose rules we can describe in our pre-existing ordinary language in a way we all perfectly well understand how to carry out.

Posted by: Sridhar Ramesh on January 8, 2011 7:09 AM | Permalink | Reply to this

Re: Entailment and Implication

“Underneath this pre-formal ordinary language, …”

Er, or on top of, or within, or whatever, depending on how you choose to spatially visualize levels of abstraction. :)

Posted by: Sridhar Ramesh on January 8, 2011 7:12 AM | Permalink | Reply to this

Re: Entailment and Implication

I tend to view first-order logic as the strongest and most natural logic that is still purely logic. Sure, you can define first-order logic in a weaker system, but I think that we think in at least first-order logic. Certainly we talk in it. You can write a C compiler in assembly language, but most of us don’t think about algorithms in assembly language; our mental pseudocode is much more like C.

On the other hand I don’t really regard “higher-order logic” as “logic” in the same foundational sense as first-order logic, since it inevitably starts bringing in questions that I would regard as belonging more to set theory.

Obviously that is subjective and disputable, but I think that’s some of why I tend to view FOL as distinguished and important.

Posted by: Mike Shulman on January 11, 2011 4:10 AM | Permalink | Reply to this

Re: Entailment and Implication

The difference lies within the agent doing the reasoning. “a entails b” means, “If a is true, then b must be true under the same model”. “a implies b” means, “I, the agent, infer that b is true given that a is true.” If the agent reasons perfectly, it will infer only what is entailed and will infer everything that is entailed, and the set of entailed facts and implied facts will be equivalent.

Posted by: Tara on April 3, 2015 11:24 PM | Permalink | Reply to this
