
February 11, 2020

Types in Natural Language

Posted by David Corfield

One hoped-for effect of my book is that some day philosophers will look to the resources of type theory rather than the standard (untyped) first-order formalisms that are the common currency at the moment. Having been taught first-order logic in a mathematical fashion during my Master's degree many years ago, it struck me how ill-suited it was to representing ordinary language. And yet still our undergraduates are asked to translate from natural language into first-order logic, e.g. Oxford philosophers here. This amusing attempt to translate famous quotations rather proves the point.

To the extent that first-order logic works here, it tends to lean heavily on the supply of a reasonable domain. But when quantification occurs over a variety of domains, as in

Everyone has at some time seen some event that shocked them,

we are asked to imagine some vast pool of individuals to pull out variously people, times and events. Small wonder computer science has looked to control programs via the discipline of types. Just as we want a person in response to Who?, and a place in response to Where?, programs need to compute with terms of the right type.
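In dependent type theory the sentence above can be rendered with each quantifier ranging over its own type, rather than over a single undifferentiated domain. A sketch, with illustrative type and predicate names:

```latex
\prod_{x : \mathrm{Person}} \; \sum_{t : \mathrm{Time}} \; \sum_{e : \mathrm{Event}} \; \mathrm{SawAt}(x, e, t) \times \mathrm{Shocked}(e, x)
```

Here the dependent product $\prod$ plays the role of 'everyone', and the dependent sums $\sum$ the roles of 'at some time' and 'some event'.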

Type theories come with different degrees of sophistication. I’m advocating dependent type theory. In the Preface to his book, Type-theoretic Grammar (OUP, 1994), Aarne Ranta recounts how the idea of studying natural language in constructive (dependent) type theory occurred to him in 1986:

In Stockholm, when I first discussed the project with Per Martin-Löf, he said that he had designed type theory for mathematics, and that natural language is something else. I said that similar work had been done within predicate calculus, which is just a part of type theory, to which he replied that he found it equally problematic. But his general attitude was far from discouraging: it was more that he was so serious about natural language and saw the problems of my enterprise more clearly than I, who had already assumed the point of view of logical semantics. His criticism was penetrating but patient, and he was generous in telling me about his own ideas. So we gradually developed a view that satisfied both of us, that formal grammar begins with what is well understood formally, and then tries to see how this formal structure is manifested in natural language, instead of starting with natural language in all its unlimitedness and trying to force it into some given formalism.

Ranta achieves a remarkable amount in this book, and yet I think it didn’t receive as wide recognition as it deserved, although there are a few people working in this tradition today.

It would be interesting to see how different languages lexically code for types, such as where the Japanese affix different endings to their numerals when counting people, long objects, small objects, flat objects, small animals, larger animals, mechanical objects, and so on. Anyone have similar examples from other languages?

Posted at February 11, 2020 10:44 AM UTC


20 Comments & 0 Trackbacks

Re: Types in Natural Language

It is not central to predicate logic that there is only one sort. But pedagogically it simplifies things and it has unfortunately become rather universal. It would be better to compare type theory to multi-sorted predicate logic.
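To make the contrast explicit: in a one-sorted rendering the sort restrictions travel as guard predicates, while a many-sorted rendering attaches them to the variables themselves (the predicate names here are invented for illustration):

```latex
\forall x \,\bigl(\mathrm{Person}(x) \to \exists y \,(\mathrm{Event}(y) \wedge \mathrm{Saw}(x, y))\bigr)
\qquad\text{vs.}\qquad
\forall x{:}\mathrm{Person}\;\; \exists y{:}\mathrm{Event}\;\; \mathrm{Saw}(x, y)
```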

Posted by: Jonathan Kirby on February 11, 2020 11:13 AM | Permalink | Reply to this

Re: Types in Natural Language

It would be better to compare type theory to multi-sorted predicate logic.

Yes, and I will do so, but let’s just note here that the use of the untyped (or uni-typed) variety does more than provide for pedagogical simplicity. It informs highly-regarded research articles, such as Timothy Williamson’s Everything.

Posted by: David Corfield on February 11, 2020 11:47 AM | Permalink | Reply to this

Re: Types in Natural Language

I suppose you have already considered Chomsky’s Colorless green ideas sleep furiously to explain that a “category error” is a type error, rather than a “semantic error” as usually claimed, and thus ultimately a syntactical error (in a non-context-free grammar)?

Posted by: Martin Escardo on February 11, 2020 6:17 PM | Permalink | Reply to this

Re: Types in Natural Language

I’ve never quite understood why this phrase is considered to be non-sensical. Odd, yes. But meaningless? Certainly not! I can imagine this (or perhaps more sonorous sentences) appearing in a poem, or an allegory, or some other meaningful but non-referential act of language.

Analyzing it as a type error does seem to make sense. Jokes and poems often trade on type errors for aesthetic effect.

Posted by: David Jaz Myers on February 11, 2020 7:07 PM | Permalink | Reply to this

Re: Types in Natural Language

The concept of green doesn't apply to ideas. And the concept of sleeping doesn't apply to ideas either. Grammatically, this sentence is an assertion and so it should have a truth value. But this sentence is not true or false. Not because we can't decide which one it is, but simply because it doesn't make sense. Saying that the sentence is true (or false) is more non-sensical than the sentence itself!
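A minimal sketch of the point in Python, treating 'green' as a predicate defined only on physical objects, so that applying it to an idea is a type error rather than a falsehood. The classes and the checker are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class PhysicalObject:
    name: str
    colour: str

@dataclass
class Idea:
    name: str

def is_green(x: PhysicalObject) -> bool:
    # 'green' is only defined on physical objects: applying it to
    # anything else is a type error, not a false statement.
    if not isinstance(x, PhysicalObject):
        raise TypeError(f"'green' does not apply to {type(x).__name__}")
    return x.colour == "green"

print(is_green(PhysicalObject("pea", "green")))  # True
# is_green(Idea("liberty"))  # raises TypeError: no truth value at all
```

The ill-typed application does not evaluate to False; it has no truth value to evaluate to, which is just the diagnosis above.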

Posted by: Martin Escardo on February 11, 2020 7:40 PM | Permalink | Reply to this

Re: Types in Natural Language

I have some ideas about how to live in a more ecologically responsible way. (Green ideas.)

Some of those ideas are interesting, some are not. (The latter are colorless green ideas.)

I try to put the uninteresting ones out of my mind so I can focus on other things. (Let them sleep.)

When I do that, those uninteresting ideas nevertheless keep intruding on my thoughts, as if they are lying restlessly in my mind. (Those colorless green ideas sleep furiously.)

I imagine any grammatical sentence could be put through a similar treatment.

Posted by: Mark Meckes on February 11, 2020 8:25 PM | Permalink | Reply to this

Re: Types in Natural Language

Yes, Martin, I agree that thinking in terms of typing errors is a good way to go.

At the same time, language is a dynamic tool, and terms are redeployed, as in Mark’s interpretation of ‘green’ as ecologically responsible. This latter case can evidently be used in a truth claim, such as ‘Planting a million trees here is a green project’.

While Chomsky’s sentence can be seen as making a modicum of sense, we shouldn’t let this distract us from the tight typing that occurs in natural language. Whether it makes sense to count a collection is often indicative. If I have just my tree-planting plan and a single pea on my desk, I’m hardly likely to say that there are two green things there.

The philosopher Gilbert Ryle, influential for his writings on category mistakes, looks here to the counting test:

A man would be thought to be making a poor joke who said that three things are now rising, namely the tide, hopes and the average age of death. It would be just as good or bad a joke to say that there exist prime numbers and Wednesdays and public opinions and navies; or that there exist both minds and bodies. (The Concept of Mind, 1949, p. 23)

Looking now at the SEP article by Ofra Magidor on category mistakes, I see it says

Influenced by the work of Edmund Husserl, Ryle argued that category mistakes were the key to delineating ontological categories: the fact that ‘Saturday is in bed’ is a category mistake while ‘Gilbert Ryle is in bed’ is not, shows that Saturday and Ryle belong to different ontological categories. Moreover, Ryle maintained that distinguishing between categories was the central task of philosophy: “The matter is of some importance, for not only is it the case that category-propositions (namely assertions that terms belong to certain categories or types), are always philosopher’s propositions, but, I believe, the converse is also true. So we are in the dark about the nature of philosophical problems and methods if we are in the dark about types and categories.” (Ryle 1938, 189)

That’s going to become my new slogan:

We are in the dark about the nature of philosophical problems and methods if we are in the dark about types and categories.

Posted by: David Corfield on February 12, 2020 9:08 AM | Permalink | Reply to this

Re: Types in Natural Language

I think that this connects more broadly to a question of explanatory priority in the philosophy of language – whether the meaningfulness of an assertion is to be explained in terms of the capacity for human beings to grasp relations of denotation between expressions and truth-values/objects, or in terms of their recognising certain rules of usage (and those capacities/rules should be amenable to being cashed out somehow – maybe biologically or socially respectively).

Maybe this is all implicit in Martin’s comment! but Chomsky himself has long been sceptical of the role of semantics in the study of language-use:

“As for semantics, insofar as we understand language use, the argument for a reference-based semantics (apart from an internalist syntactic version) seems to me to be weak. It is possible that natural language has only syntax and pragmatics; it has a ‘semantics’ only in the sense of ‘the study of how this instrument, whose formal structure and potentialities of expression are the subject of syntactic investigation, is actually put to use in a speech community,’ to quote the earliest formulation in generative grammar 40 years ago, influenced by Wittgenstein, Austin, and others.” New Horizons in the Study of Language (2000), p.132

Posted by: gavin on February 12, 2020 9:58 PM | Permalink | Reply to this

Re: Types in Natural Language

Interesting! I didn’t know Chomsky held such views.

To provide some further context for readers, some philosophers described in your first paragraph take inspiration from the introduction-elimination rule schemes of natural deduction. Meaning is determined by when we are entitled to employ words in sentences and what we are able to infer from such sentences.

It’s no surprise, then, that one such philosopher, Robert Brandom, crops up frequently in my book.

Posted by: David Corfield on February 13, 2020 8:41 AM | Permalink | Reply to this

Re: Types in Natural Language

It’s a very famous dispute in linguistics, between the Chomsky school on the one hand, and people using Montague semantics (i.e., type theory) on the other.

Barbara Partee has a really interesting essay, Reflections of a Formal Semanticist, about this and many other things.

IMO, her essay is particularly interesting because she did her PhD at MIT with Chomsky as her supervisor, and then went on to become one of the leading figures in the formal semantics tradition, and so she is able to speak with authority and sympathy about both sides of this dispute.

You might like this quote:

In particular, Montague’s rich type theory (whose antecedents are acknowledged in his work) made it possible to analyze virtually all of the most basic grammatical relations in terms of function-argument structure. In retrospect we can see that the generative semanticists and other linguists who tried hard to make semantics compositional were hampered by the mismatch between natural language syntactic structure and the structure of first order logic, the only logic most linguists knew.

Or this one:

Lambdas changed my life!
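Montague's function-argument analysis can be sketched in a few lines of Python, with entities modeled as strings, predicates (type e → t) as functions from entities to booleans, and a quantifier like 'every' as a higher-order function of type (e → t) → ((e → t) → t). The toy domain and lexicon are invented for illustration:

```python
# Montague-style types: e = entities, t = truth values.
# A noun or intransitive verb denotes a predicate e -> t;
# a determiner like "every" denotes (e -> t) -> ((e -> t) -> t).

domain = ["alice", "bob", "carol"]            # toy domain of entities
student = lambda x: x in {"alice", "bob"}     # e -> t
sleeps = lambda x: x in {"alice", "bob"}      # e -> t

def every(noun):
    # "every N" is the set of properties true of all N's
    return lambda vp: all(vp(x) for x in domain if noun(x))

def some(noun):
    # "some N" is the set of properties true of at least one N
    return lambda vp: any(vp(x) for x in domain if noun(x))

# "Every student sleeps" parses as every(student)(sleeps):
print(every(student)(sleeps))                     # True in this model
print(some(student)(lambda x: x == "carol"))      # False in this model
```

Everything is function application, which is Partee's point about function-argument structure: no first-order paraphrase is needed to give "every student" a denotation of its own.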

Posted by: Neel Krishnaswami on February 14, 2020 2:39 PM | Permalink | Reply to this

Re: Types in Natural Language

Interesting! Looks like a good read.

I should look more into Montague. From what I’ve seen of his type theory I’d hardly describe it as “rich”, but maybe that’s relative to what went before.

Posted by: David Corfield on February 15, 2020 8:07 AM | Permalink | Reply to this

Re: Types in Natural Language

I believe Meinong’s jungle deserves some exploration.

Posted by: jackjohnson on February 11, 2020 9:18 PM | Permalink | Reply to this

Re: Types in Natural Language

I’ve made that into a link.

To the extent that this concerns ‘possible objects’, such as flying horses, chapter 4 on modalities is the relevant one, which I’ll get to in a later post.

Posted by: David Corfield on February 12, 2020 8:43 AM | Permalink | Reply to this

Re: Types in Natural Language

I mentioned Japanese counting suffixes. I didn’t realise there could be so many, as here.

Posted by: David Corfield on February 12, 2020 1:57 PM | Permalink | Reply to this

Re: Types in Natural Language

While dependent type theory is an improvement over first-order logic, unfortunately both dependent type theory and first-order logic assume that natural languages are static, when in fact natural languages are dynamic by nature.

Posted by: Madeleine Birchfield on February 13, 2020 1:07 AM | Permalink | Reply to this

Re: Types in Natural Language

I don’t see why we can’t model changes of language use, or do you mean rapid dynamic change?

For the former, at some stage I give a definition of being a metre long in terms of being the same length as the standard metre rule in Paris. Later I change my definition to being equal to the distance light travels in a certain interval of time. Now it becomes a live question as to the length of the obsolete Parisian rule.

The ‘theory’ of ‘dependent type theory’ is confusing. Mike Shulman gave us a great account of how we have to think in layers in What is an n-theory?. First I choose my 3-theory (say the one that describes dependent type 2-theories). Then I choose a specific dependent type 2-theory (selecting which type constructors, etc. to include). Then I specify some types I want in my particular dependent type (1-)theory. I’ll also add in plenty of definitional equalities.

So I may continue to use ‘dependent type theory’, but I have the means to remove types (say, for phlogiston), to add terms (say, Covid-19: Disease), to change definitions (as with the metre), and so on.

Is this not dynamic enough?

Posted by: David Corfield on February 13, 2020 9:04 AM | Permalink | Reply to this

Re: Types in Natural Language

Everything you say may be correct; however, I was thinking more of the holistic spatio-temporal nature of natural language, important for considerations in theoretical historical, sociological, and variational linguistics. The spatial nature of natural language is evident in the dialects of a language based upon regional differences, and the temporal nature given by natural language change. A model of natural language on the national, regional, or global level would necessarily include a field of 'dependent type theories', with each 'dependent type theory' representing each speaker of the natural language or each dialect, on a 2-sphere manifold representing Earth or a subset thereof, that evolves dynamically according to a set of deterministic or stochastic rules based upon past and present states, just as the weather is represented by a field of temperature/wind velocity/precipitation levels/et cetera on the same 2-sphere manifold representing Earth. So just as knowledge of the real numbers is necessary but not sufficient for modelling temperature on Earth, dependent type theories are necessary but not sufficient for modelling natural languages, especially in relation to their history and to other natural languages (or dialects); it is these external structural/systemic rules that are missing.

Posted by: Madeleine Birchfield on February 14, 2020 3:13 AM | Permalink | Reply to this

Re: Types in Natural Language

Doing more than model fragments of a single language is ambitious enough for the moment. There are even things to consider about two people speaking with each other, sharing most concepts, but differing on some, differing perhaps also on the elements of types and their identities. Something I think worth considering is Robert Brandom's idea that we keep a score of each other's commitments. We may have made judgements that we know our interlocutor hasn't. In a sense then we are operating with different type (1-)theories.

Posted by: David Corfield on February 14, 2020 9:07 AM | Permalink | Reply to this

Re: Types in Natural Language

There are many languages which have counters, not just Japanese. Among my favorite examples are the Mayan languages Tzotzil and Tseltal. They're a favorite because when learning about them my professor told us a joke (in Tzotzil iirc) that starts off "there are three guys walking down the street"— only instead of using the counter for two-legged things, as one should for humans, they instead use the counter for four-legged things. Because of that grammatical 'error', the listener is forced to conclude that the three guys must be falling down drunk. Nowhere else in the joke is it indicated that the guys are wasted, but the humor of the joke comes from that implicit understanding; and hence the joke can't really be translated into languages like English. From a type-theoretic standpoint, one could interpret this as being a sort of implicit coercion to get the types to match up; and from the HoTT standpoint (or any other dependent type theory without UIP) we can see this as an example of the fact that paths may have nontrivial content and so transporting along them is not a no-op.

Linguistically speaking, counter systems can (in many respects) be viewed as an extreme variant of the grammatical gender in Indo-European languages. For example, one of the primary uses of gender systems is to have some form of 'agreement' that helps clarify things like which noun an adjective modifies. Arguably Japanese counters can be said to serve the same clarifying function. That is, consider a sentence like "I have three cats". In English we would parse that as having a syntactic structure where "three" is some sort of adjective/modifier that attaches to "cats", and then the phrase "three cats" is some sort of argument to the verb "have". However, in Japanese the "three" is not a modifier on "cats"; instead it behaves syntactically more like an adverb/modifier attaching directly to the verb.† Without counters this syntax would very easily lead to confusion, but with counters it's pretty straightforward (unless someone's going out of their way trying to be confusing).

Another example of lexically coded type information is grammatical case markers (this is a stock example in the Montagovian tradition). These aren’t types for categorizing objects in the domain of discourse, as you have in the counters example above and in many-sorted logics; instead they’re types for categorizing syntactic/combinatorial potential, that is for coding how/where the pieces of a sentence fit together. Using Japanese again, we can think of verbs as having some number of slots for arguments, and each slot requires a certain type of argument (e.g., subject, direct object, indirect object); the case particles then serve as the mechanism for converting generic nouns into the appropriate syntactic type. This is important because Japanese does not have fixed word order, so we cannot rely on the position of a phrase to determine which slot it goes into (whereas English does use positional arguments, hence doesn’t ‘need’ grammatical case).‡

†: Japanese does have constructs that behave like English, but they’re not the default/simple/standard syntax.

‡: My dissertation formalizes the free-order aspect of this example, since case marking alone doesn’t free one from the sequentiality of categorical grammars. We ultimately end up with something very like operads, however some details are different since we do not have full commutativity but only a limited form of commutativity.
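The case-marker point can be sketched as follows, with particles modeled as functions tagging a bare noun with its syntactic role, and the verb consuming its arguments by role rather than by position. The romanized particles 'ga' (subject) and 'o' (direct object) are real Japanese markers, but the encoding is invented for illustration and the free-order behavior shown here is far simpler than the operad-like structure described above:

```python
# Case particles as type coercions: a bare noun plus a particle yields
# a role-tagged argument; the verb then consumes arguments by role,
# not by position, so word order is free.

def ga(noun):
    # subject marker: Noun -> SubjectArg
    return ("subject", noun)

def o(noun):
    # direct-object marker: Noun -> ObjectArg
    return ("object", noun)

def taberu(*args):
    # "eat": requires one subject and one object, in any order
    roles = dict(args)
    return f"{roles['subject']} eats {roles['object']}"

# Free word order: both argument orders yield the same parse.
print(taberu(ga("neko"), o("sakana")))   # neko eats sakana
print(taberu(o("sakana"), ga("neko")))   # neko eats sakana
```

Dropping the particles would leave `taberu("neko", "sakana")` relying on position alone, which is in effect what English does.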

Posted by: wren romano on March 25, 2020 2:56 AM | Permalink | Reply to this

Re: Types in Natural Language

Thanks for this, Wren. Just the kind of thing I was asking for. I remember the Japanese ‘wa’ and ‘ga’ from the little I picked up there, which it seems is far more involved than I thought. And then there’s a host of other particles.

I see elsewhere you say

I think there is a deep connection between natural and programming languages; there’s a reason we call both of them “languages.”

What an excellent project to work out this deep connection. From philosophy, we need many more people with knowledge of type theories/category theory.

Posted by: David Corfield on March 25, 2020 9:23 AM | Permalink | Reply to this
