## August 30, 2021

### You Could Have Invented De Bruijn Indices

#### Posted by Mike Shulman

One of the most curious things about formal treatments of type theory is the attention paid to dummy variables. The point is simple enough to explain to a calculus student: just as the integrals $\int_0^2 x^3 \, dx$ and $\int_0^2 t^3 \, dt$ are equal, so are the functions $\lambda x. x^3$ and $\lambda t. t^3$ — the name of the variable doesn’t matter. (In fact, the former statement factors through the latter.) But managing this equality (which goes by the highbrow name of “$\alpha$-equivalence”) turns out to be quite fiddly when actually implementing a typechecker, and people have developed all sorts of different-sounding ways to deal with it, with names like De Bruijn indices, De Bruijn levels, locally named terms, locally nameless terms, and so on.

In this post I want to explain what’s going on from a categorical and/or univalent perspective, and why some of these fancy-sounding things are really quite simple. To cut to the punchline, at bottom what’s going on is just different ways to choose a concrete representative of an object that’s determined only up to unique isomorphism.

Let’s consider possibly the simplest case: terms of the lambda-calculus. These are generated inductively by three operations:

• Variables are terms
• If $M$ and $N$ are terms, so is the application $M N$.
• If $M$ is a term and $x$ a variable, then the abstraction $\lambda x. M$ is a term.

A usage of a variable $x$ inside a $\lambda x$ is called bound, while other usages of $x$ are called free. Thus in $\lambda x. y x$ the usage of $x$ is bound but the usage of $y$ is free. Semantically, the free variables must be given meaning in a context such as $y:A, z:B$, while the bound variables are “dummies” that don’t appear in the context.

I just described the traditional approach, where one first defines all the terms at once, then defines what the free variables of a term are. But at the end of the day, what we really want to have is, for each context of (perhaps typed) variables, the set of terms containing those variables free. Or more precisely, containing at most those variables free, since for instance $\lambda x. y x$ makes sense in the context $y:A, z:B$ even though it doesn’t actually use the variable $z$.
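To make the traditional recipe concrete, here is a small illustrative sketch in Python (not from the original post), with terms as tagged tuples and a recursive computation of free variables:

```python
# Named terms as tagged tuples: ("var", x), ("app", M, N), ("lam", x, M).

def fv(t):
    """Free variables of a named term; a lambda removes its bound variable."""
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "app":
        return fv(t[1]) | fv(t[2])
    if tag == "lam":
        return fv(t[2]) - {t[1]}
    raise ValueError(f"not a term: {t!r}")
```

For example, `fv` applied to the encoding of $\lambda x. y x$ returns `{"y"}`, matching the bound/free distinction described above.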

Thus, from a typed perspective, it would be better to define the sets of terms indexed by the set of variables in their context. These are called well-scoped terms. Since we are ordinary mathematicians who think structurally, of course it doesn’t matter what these variables “actually are”, so we can just suppose them to be drawn from an abstract set $V$. Now we intend to give an inductive definition of the set $Tm_V$ of “terms with free variables from $V$”, with $V$ an arbitrary set, and the first two clauses are obvious:

• For any set $V$, if $x\in V$, then $v_x \in Tm_V$.
• For any set $V$, if $M\in Tm_V$ and $N\in Tm_V$, then $M N \in Tm_V$.

The third clause is less clear, since in an abstraction $\lambda x.M$ the input term $M$ must have $x$ as a free variable, while $x$ is no longer free in $\lambda x.M$. In other words, $M$ must have one more variable in its context than $\lambda x.M$, so it’s natural to say:

• For any set $V$, if $M\in Tm_{V+1}$, then $\lambda M \in Tm_V$.

Here $V+1$ denotes the coproduct, or disjoint union, of $V$ with a one-element set, denoted $1$. Note that when phrased like this, we don’t need to annotate the $\lambda$ with the variable name $x$; we know what the bound variable is, it’s just the extra element of $V+1$ that comes from $1$.
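As a hypothetical sketch of how these clauses might look in code, we can encode an element of $V+1$ as either the token `"star"` (playing $inr(\star)$) or a wrapped pair `("inl", x)`, and check well-scopedness by structural recursion:

```python
# Terms: ("var", x), ("app", M, N), ("lam", M) -- the lam carries no name.
# An element of V + 1 is either the token "star" (for inr(*)) or a
# wrapped element ("inl", x) with x in V.

def well_scoped(term, in_scope):
    """Does `term` belong to Tm_V, where `in_scope` tests membership in V?"""
    tag = term[0]
    if tag == "var":
        return in_scope(term[1])
    if tag == "app":
        return well_scoped(term[1], in_scope) and well_scoped(term[2], in_scope)
    if tag == "lam":
        # extend the scope by exactly one fresh element
        extended = lambda v: (v == "star"
                              or (isinstance(v, tuple) and v[0] == "inl"
                                  and in_scope(v[1])))
        return well_scoped(term[1], extended)
    return False
```

The `lam` case mirrors the third clause exactly: the body is checked against a scope with one extra element.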

The point is that, categorically speaking, $V+1$ doesn’t denote a specific set, but only a set determined up to unique isomorphism; and thus we could potentially make different choices about exactly what set to use. But of course, from a structural point of view, the choice doesn’t matter. So to start with, let’s proceed purely structurally. This means that it doesn’t make sense to ask whether the elements of $V+1$ are “the same as” any of the elements of $V$, or the element of $1$. Instead, we have two functions $inl : V\to V+1$ and $inr : 1\to V+1$, and any element of $V+1$ can be written uniquely as either $inl(x)$ for some $x\in V$ or as $inr(\star)$ for $\star$ the element of $1$.

To see what sort of terms we end up with, consider for example the ordinary term “$\lambda x. \lambda y. x$”. This is a closed term with no free variables, so it must live in $Tm_\emptyset$. Its outer operation is an abstraction, so it must be obtained from the constructor $\lambda$ applied to some term in $Tm_{\emptyset+1}$ that corresponds to “$\lambda y.x$”. This latter term is also an abstraction, so it must also be obtained from $\lambda$ applied to some term in $Tm_{(\emptyset+1)+1}$. This last term must be a variable, i.e. an element of $(\emptyset+1)+1$, and there are two such elements: $inr(\star)$ and $inl(inr(\star))$. Since $inr(\star)$ comes from the rightmost copy of $1$, it corresponds to the most recently bound variable (which we were originally writing “$y$”); while $inl(inr(\star))$ comes from the other copy of $1$, hence corresponds to the next most recently bound variable (our “$x$”).

Thus, our term $\lambda x. \lambda y. x$ would be written $\lambda \lambda \, v_{inl(inr(\star))}$, while the similar but distinct term $\lambda x. \lambda y. y$ would be written $\lambda \lambda \, v_{inr(\star)}$. The examples at the next level of binding should make the pattern clear:

• $\lambda x. \lambda y. \lambda z. x$ is $\lambda\lambda\lambda\, v_{inl(inl(inr(\star)))}$.
• $\lambda x. \lambda y. \lambda z. y$ is $\lambda\lambda\lambda\, v_{inl(inr(\star))}$.
• $\lambda x. \lambda y. \lambda z. z$ is $\lambda\lambda\lambda\, v_{inr(\star)}$.

In general, whenever we need to use a bound variable, we “work out” along the structure of the term counting the number of $\lambda$s that we pass until we get to the one where our desired variable was bound. If we passed $n$ such $\lambda$s (not counting the one where our variable was bound), then the representation of our variable is $v_{inl^{n}(inr(\star))}$.

The number $n$ is called the De Bruijn index of our variable. Thus, we see that De Bruijn indices are really the most natural representation of bound variables from a structural point of view. Usually people just write “$n$” instead of $v_{inl^{n}(inr(\star))}$, so our first two examples would be $\lambda\lambda 1$ and $\lambda\lambda 0$. As for the free variables, when referring to an element $x\in V$ inside a term in $Tm_V$, we have to represent it by $inl^n(x)$ where $n$ is the number of $\lambda$s inside which the reference occurs; thus $\lambda y.\lambda z. x$ would be $\lambda\lambda\, v_{inl(inl(x))}$.
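A small illustrative helper (hypothetical Python, encoding $inr(\star)$ as the token `"star"` and $inl$ as tuple-wrapping) makes the counting explicit:

```python
# The variable with De Bruijn index n is v_{inl^n(inr(*))}.

def var(n):
    v = "star"             # inr(*): the most recently bound variable
    for _ in range(n):
        v = ("inl", v)     # each inl steps one binder outward
    return ("var", v)

true_  = ("lam", ("lam", var(1)))   # λx. λy. x, i.e. λλ1
false_ = ("lam", ("lam", var(0)))   # λx. λy. y, i.e. λλ0
```

Here `var(2)` produces the encoding of $v_{inl(inl(inr(\star)))}$, the variable of $\lambda x.\lambda y.\lambda z. x$.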

Other representations of variable binding arise by working non-structurally, allowing ourselves to make more specific choices of “$V+1$”. Specifically, suppose we fix a particular infinite ambient set $W$, a subset $C \subseteq P W$ of its powerset containing at least one element of each finite cardinality, and an operation $suc : C\to C$ together with a family of functions $inl: V\to suc(V)$ exhibiting each $suc(V)$ as a coproduct of $V$ with a one-element set. Then we can similarly define the family of sets of terms $Tm_V$, for all $V\in C$, by analogous clauses:

• For any $V\in C$, if $x\in V$, then $v_x \in Tm_V$.
• For any $V\in C$, if $M\in Tm_V$ and $N\in Tm_V$, then $M N \in Tm_V$.
• For any $V\in C$, if $M\in Tm_{suc(V)}$, then $\lambda M \in Tm_V$.

Some popular choices of $W, C, suc, inl$ include the following.

1. Let $W=\mathbb{N}$, with $C$ consisting of the initial segments $[n] = \{0,1,2,\dots, n-1\}$, and $suc([n]) = [n+1]$ and $inl$ the obvious inclusion, $inl(k)=k$. This representation is called De Bruijn levels: in a term without free variables, each variable is identified by the number of $\lambda$s we have to cross to get to its binder, counting in from the outside of the term rather than out from the variable usage. Thus, for instance, $\lambda x. \lambda y. x$ is $\lambda\lambda v_0$ while $\lambda x. \lambda y. y$ is $\lambda\lambda v_1$. When there are free variables, they are given an ordering and identified by their position in that ordering, with bound variables numbered after the free ones. Note that unlike De Bruijn indices, with De Bruijn levels each variable has the same representation regardless of where it is used in the term.

2. Let $W,C,suc$ be as before, but now define $inl : [n] \to [n+1]$ by $inl(k) = k+1$. This yields De Bruijn indices again, with the free variables ordered and numbered counting out from the term, i.e. coming after all the bound ones.

3. Fix an infinite set $F$ that is disjoint from $\mathbb{N}$, let $W=F \cup \mathbb{N}$, and let $C$ consist of all finite subsets of $W$ such that $V\cap \mathbb{N}$ is an initial segment. If $V = V_F \cup [n]$, with $[n] = V\cap \mathbb{N}$, then define $suc(V) = V_F \cup [n+1]$, with $inl$ acting as the identity on $V_F$ and as $inl(k)=k+1$ on $[n]$. If we consider only the terms whose free variables are a subset of $F$, this yields a representation known as locally nameless, where bound variables are given De Bruijn indices but free ones are named by an element of some arbitrary set $F$. One popular choice is to take $F$ to be a second, disjoint copy of $\mathbb{N}$ as well, with $V_F$ required to be an initial segment too; then the free variables are represented as De Bruijn levels.

All of these representations are isomorphic to each other, simply because their only difference is that they make different choices about how to “materially” represent the structural operation $V+1$. (More precisely, given a bijection between a set $V\in C$ in one representation and a set $V'\in C'$ in another representation, there is an induced bijection $Tm_V \cong Tm'_{V'}$.) In particular, none of these representations need a separate notion of “$\alpha$-equivalence” to identify terms by “renaming bound variables”, because they all have a unique way to name bound variables.
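For instance, on closed terms the passage between De Bruijn indices and De Bruijn levels is pure arithmetic: at binder depth $d$, index $n$ and level $\ell$ are related by $\ell = d - 1 - n$. A hypothetical Python sketch, with terms as tagged tuples:

```python
# Closed terms with integer variables: ("var", n), ("app", M, N), ("lam", M).

def index_to_level(t, depth=0):
    """Convert De Bruijn indices to De Bruijn levels. At binder depth d,
    index n and level l satisfy l = d - 1 - n."""
    tag = t[0]
    if tag == "var":
        return ("var", depth - 1 - t[1])
    if tag == "app":
        return ("app", index_to_level(t[1], depth), index_to_level(t[2], depth))
    if tag == "lam":
        return ("lam", index_to_level(t[1], depth + 1))
    raise ValueError(f"not a term: {t!r}")

# The formula is its own inverse (d - 1 - (d - 1 - n) = n), so the
# same function also converts levels back to indices.
level_to_index = index_to_level
```

Applied to $\lambda\lambda 1$ (our $\lambda x.\lambda y. x$) it yields $\lambda\lambda v_0$, in agreement with the De Bruijn level examples above.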

But what about ordinary “named variables”, where we do have a choice about what bound variable names to use? One way to obtain these terms in a well-scoped way is with the following inductive definition:

• For any set $V$, if $x\in V$, then $v_x \in Tm_V$.
• For any set $V$, if $M\in Tm_V$ and $N\in Tm_V$, then $M N \in Tm_V$.
• For any set $V$ and element $x\notin V$, if $M\in Tm_{V\cup \{x\}}$, then $\lambda x. M \in Tm_V$.

Note that since there is a choice of $x$ in the third clause, we now have to notate the variable $x$ in the term $\lambda x.M$, as expected. This choice also means that now we are no longer defining the same family of sets, but will have to quotient by $\alpha$-equivalence to obtain something isomorphic. (We could also achieve that from the start by making this a higher inductive definition, with a fourth constructor saying that $\lambda x.M = \lambda y.M$ whenever $x,y\notin V$.)

The requirement that $x\notin V$ forbids “variable shadowing”. That is, if $x$ is already in scope at the point of a $\lambda$-abstraction, either as a free variable or as a variable bound by an enclosing $\lambda$, we are not allowed to bind it again: we can’t write $\lambda x. \lambda x. x$. I believe this is roughly what’s known as the Barendregt convention for named variables.

Another approach to named variables instead allows re-use of names, where inner binders “shadow” outer ones; thus $\lambda x. \lambda x. x$ would be $\alpha$-equivalent to $\lambda x. \lambda y. y$, whereas inside the inner $M$ of a term $\lambda x. \lambda x. M$ there is no way to refer to the outer $x$ any more. We could achieve something like this by simply dropping the requirement that $x\notin V$:

• For any set $V$ and element $x$, if $M\in Tm_{V\cup \{x\}}$, then $\lambda x. M \in Tm_V$.

Then if it happens that $x\in V$ already, we have $V \cup \{x\} = V$, so the inner term $M$ has no more variable names to refer to than it used to. But the fact that the cardinality of the context sometimes doesn’t go up by one when we pass across a binder is kind of icky. I haven’t seen anyone really try to work with well-scoped terms like this, but it seems likely to cause all sorts of problems, in particular in defining and quotienting by $\alpha$-equivalence to obtain something isomorphic to the simpler representations.
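For contrast, here is how shadowing is usually resolved in practice when converting raw named terms to De Bruijn indices: keep the bound names innermost-first and take the first match, so the nearest binder wins. (A hypothetical Python sketch; two named terms are then $\alpha$-equivalent exactly when they convert to the same result.)

```python
def to_debruijn(t, env=()):
    """Convert named terms ("var", x) / ("app", M, N) / ("lam", x, M)
    to De Bruijn indices. `env` lists bound names innermost-first, so
    .index picks the nearest binder: shadowing resolves automatically.
    Raises ValueError for a variable that is not in scope."""
    tag = t[0]
    if tag == "var":
        return ("var", env.index(t[1]))
    if tag == "app":
        return ("app", to_debruijn(t[1], env), to_debruijn(t[2], env))
    if tag == "lam":
        return ("lam", to_debruijn(t[2], (t[1],) + env))
    raise ValueError(f"not a term: {t!r}")
```

On this account $\lambda x.\lambda x. x$ and $\lambda x.\lambda y. y$ convert to the same term, witnessing their $\alpha$-equivalence.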

Another approach to variable shadowing would be to mix names with De Bruijn indices: given a set $F$ of variable names, let $W = F \times \mathbb{N}$ and $C$ the set of finite subsets of $W$ whose fiber over each point of $F$ is an initial segment of $\mathbb{N}$. Then given $V\in C$ and $x\in F$, if the fiber of $V$ over $x$ is $\{x\} \times [n]$, let $V + x$ be $V \cup \{(x,n)\}$, and define $inl : V \to V+x$ by $inl(x,k) = (x,k+1)$ on the fiber over $x$ and the identity on all other fibers. Now the third clause becomes

• For any $V\in C$ and $x\in F$, if $M\in Tm_{V+x}$, then $\lambda x. M \in Tm_V$.

Now we still only need to decorate the binder $\lambda$ with the element $x$ of $F$ (rather than an element of $W = F\times \mathbb{N}$), and if we identify $x\in F$ with $(x,0)\in W$ we can use named variables as usual, with shadowing: the term $\lambda x. \lambda x. x$ means $\lambda x. \lambda x. v_{(x,0)}$, where $v_{(x,0)}$ refers to the innermost $\lambda x$. But shadowed variables are still available by incrementing the index: in $\lambda x. \lambda x. v_{(x,1)}$ the variable refers to the outer binder.
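A hypothetical Python sketch of this resolution rule: a variable is a pair of a name $x$ and an index $k$, meaning "skip the $k$ nearest binders named $x$"; converting to plain De Bruijn indices confirms that $\lambda x.\lambda x. v_{(x,1)}$ reaches the outer binder.

```python
def ns_to_debruijn(t, env=()):
    """Namespaced De Bruijn: a variable ("var", x, k) skips the k nearest
    binders named x. Convert to plain De Bruijn indices; raises IndexError
    if the variable is out of scope."""
    tag = t[0]
    if tag == "var":
        x, k = t[1], t[2]
        positions = [i for i, name in enumerate(env) if name == x]
        return ("var", positions[k])
    if tag == "app":
        return ("app", ns_to_debruijn(t[1], env), ns_to_debruijn(t[2], env))
    if tag == "lam":
        return ("lam", ns_to_debruijn(t[2], (t[1],) + env))
    raise ValueError(f"not a term: {t!r}")
```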

I doubt anyone would use this representation in practice: from a formal point of view, it seems likely to have all the drawbacks of both named variables and De Bruijn indices. However, I think it’s a nice way to understand what “named variables” are actually doing, and how they relate to things like De Bruijn indices, De Bruijn levels, locally nameless variables, and so on. (This discussion also suggests that in principle, it should be possible to abstract over a “method of handling variables”; a few years ago I wrote down some experiments in this direction here, but never pursued it further.)

Anyway, the point I wanted to make is that all these weird-looking things are “really just” different representations of an invariant structural operation. This doesn’t make them any less important as far as implementation goes: when actually coding things in a computer, we often do need to choose particular representations. But maybe it makes them a bit less scary to a category theorist.

Posted at August 30, 2021 9:23 PM UTC


### Re: You Could Have Invented De Bruijn Indices

What does the notation v_x mean? Is it just notation for “the variable called x”? So in the “traditional” formulas it would just be written x?

Posted by: Noah Snyder on August 30, 2021 11:10 PM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

Yes, that’s right. Just like the elements of a coproduct $X+Y$ can’t, structurally, be identified literally with elements of $X$ or $Y$ but have tags $inl$ and $inr$ on them, the elements of an inductively defined type like $Tm_V$ all have tags on them, like $\lambda$ and $v$. Application $M N$ should really have a tag too, like $app(M,N)$; I’m not sure why I felt the need to tag the variables explicitly but not application! Sorry about that.

Posted by: Mike Shulman on August 30, 2021 11:30 PM | Permalink | Reply to this

You mentioned not seeing anyone allow shadowed variables, but Coq allows fun x : nat => fun x : nat => x. You were of course talking about the internal representation, so I’ll mention that Python also allows it, as lambda x: lambda x: x, and that’s probably how it’s stored as well.

PS: You wrote $\lambda\lambda v_{inl(inr(\ast))}$ at one spot where you meant $\lambda \lambda v_{inr(\ast)}$.

Posted by: Dan Christensen on August 31, 2021 1:31 AM | Permalink | Reply to this

Ah, sorry if I wasn’t clear. Of course lots of languages allow shadowed variables; that’s why I brought up the possibility in the first place. In fact, I don’t know of any languages that don’t allow you to shadow variables, at least in their “surface” syntax that uses named variables. (I don’t know of any serious languages that use anything other than named variables in surface syntax, but many of them use something like De Bruijn indices in their internal representations.) And the traditional approach to formal syntax with named variables also allows variable shadowing: $\lambda x. \lambda x. x$ is a perfectly good term, and we compute its free variables as

$fv(\lambda x. \lambda x. x) = fv(\lambda x.x) \setminus \{ x \} = (fv(x) \setminus \{x\})\setminus \{x\} = (\{x\} \setminus \{x\})\setminus \{x\} = \emptyset \setminus \{x\} = \emptyset.$

All I meant was that I haven’t seen anyone doing rigorous mathematics with intrinsically well-scoped terms using named variables that allow variable shadowing. I suppose maybe that’s so many qualifiers that the statement is essentially vacuous. I’m not sure offhand whether I’ve seen intrinsically well-scoped terms done with named variables at all.

Posted by: Mike Shulman on August 31, 2021 1:44 AM | Permalink | Reply to this

And I fixed the typo, thanks!

Posted by: Mike Shulman on August 31, 2021 1:45 AM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

I humbly suggest an alternative, first previewed in the 80s by Curien, Hagino, and others: Let’s remove lambda-abstractions and names altogether, and use combinators instead. Which combinators? We’ll choose the categorical combinators, which implement universal properties in arbitrary (free) categories.

Concretely, I recently implemented an esoteric programming language, Cammy. Expressions in Cammy do not bind names. For example, the following expression, which might well come directly from Hagino’s thesis on categorical programming, implements addition:

(uncurry (pr (curry snd) (curry (comp app succ))))

While this is nigh-unreadable by our typical standards, it can be fed directly into an optimizer which uses equational reasoning, with no extra nominal logic or scope-mangling. For a simpler example, the following expression implements the test for whether a natural number is even:

(pr t not)

The arrow $\mathtt{t} \colon 1 \to \Omega$ selects truth, the arrow $\mathtt{not} \colon \Omega \to \Omega$ flips between truth and falsity, and the functor $\mathtt{pr} \colon (1 \to X) \to (X \to X) \to \mathbb{N} \to X$ builds an arrow $\mathtt{(pr\;t\;not)} \colon \mathbb{N} \to \Omega$ with primitive recursion.

For more details, as well as citations to related work, please see this appendix.

Posted by: Corbin on August 31, 2021 3:21 AM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

Sorry, is the optimizer supposed to deal somehow with the nigh-unreadability?

Posted by: Mike Shulman on August 31, 2021 3:34 AM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

Humans can learn to read any language; languages are memetic. There are even less-readable tacit languages which are used in practice, like K. Additionally, entire subcultures of mainstream ecosystems practice pointfree techniques, like pointfree Haskell.

To make the point crisper, at the risk of only being more confusing, consider the following Scheme expression:

(define (add x y) (if (eq? x 0) y (add (- x 1) (+ y 1))))

This is longer than the respective Cammy snippet, but equivalent in intent and behavior (when restricted to the natural numbers). A Scheme-reader would consider this to be very clear (very slow and buggy) Scheme, but I don’t know whether an English-reader would consider either to be readable; readability might be relative to each reader’s personal experiences and practiced abilities.

Also, for Cammy specifically, I figured that someday I would introduce some sort of M-expression or Shutt-style F-expression. I don’t know what they would look like, though. It happens to be the case, historically, that when a low-level machine language is at all readable and writeable, there will be some folks who prefer the machine language to any sort of higher-level intent-oriented language; hopefully both Cammy and any languages which desugar to it will gracefully accept this fate.

Posted by: Corbin on August 31, 2021 4:34 PM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

I’ll grant that humans can learn to read almost any language – even De Bruijn indices! – but that doesn’t change the fact that some languages are significantly easier than others, both to learn and to use.

Posted by: Mike Shulman on August 31, 2021 11:44 PM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

The history page for the CAML language elaborates the origins of categorical combinators and their use as a low-level language/intermediate form for a compiler: https://caml.inria.fr/about/history.en.html

Posted by: Ryan Wisnesky on September 1, 2021 2:12 AM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

Rather vague question, but by thinking structurally about variables like this, is there a way to rethink the “mismatch” between type theory and category theory, as here?

In category theory, however, pullback is not generally strictly associative, so there is a mismatch. Similarly, every type-forming operation must also be strictly preserved by substitution/pullback.

The way this is generally dealt with is to introduce a category-theoretic structure which does have a “substitution” operation which is strictly associative, hence does correspond directly to type theory, and then show that any category can be “strictified” into such a stricter structure. Unfortunately, there are many different such structures which have been used, differing only slightly.

Can’t the type theory be de-strictified? Or is it that concrete choices must be made, as you write

when actually coding things in a computer, we often do need to choose particular representations.

Posted by: David Corfield on August 31, 2021 8:07 AM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

Various people have considered de-strictifying the type theory. It’s almost certainly possible, but most of us think it would have a severe impact on practical usability. It’s possible it would provide an alternative approach to the coherence problem, doing the translation from strict to non-strict in the world of syntax rather than making non-strict semantic things into strict ones. But I don’t know of it being seriously pursued.

I don’t think that is related to the issues of variable binding, however.

Posted by: Mike Shulman on August 31, 2021 2:18 PM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

Ok, thanks. That makes sense.

Posted by: David Corfield on August 31, 2021 5:27 PM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

I doubt anyone would use this representation in practice…

This is the approach taken in Dhall. See: https://www.haskellforall.com/2021/08/namespaced-de-bruijn-indices.html

Posted by: Isaac Elliott on August 31, 2021 12:45 PM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

Wow! What a good thing I wrote this blog post, or maybe I never would have heard of Dhall. It’s amusing that I suggested that “namespaced De Bruijn indices” (a good name) would have the drawbacks of both De Bruijn indices and named variables, while that blog post claims they have the best of both worlds! On further consideration, I think the truth is in between: they have some of the advantages of each and some of the disadvantages of each.

• Like named variables, namespaced De Bruijn indices lead to fairly readable terms.

• Like De Bruijn indices, namespaced De Bruijn indices avoid any possibility of “variable capture” without the need to do $\alpha$-renaming during substitution.

• Like De Bruijn indices, namespaced De Bruijn indices require explicit weakening of terms when substituting them under binders. This probably won’t get used much in practice, since people tend to use different variable names for most things; but the code for it still has to be written, and if it rarely gets used then the code is more likely to be buggy.

• Like named variables, namespaced De Bruijn indices require explicit code to test $\alpha$-equivalence, rather than simply using syntactic identity the way De Bruijn indices do. (The blog post suggests testing $\alpha$-equivalence by simply converting to ordinary De Bruijn indices — it seems to me if you’re doing that, you might as well just use the ordinary De Bruijn indices internally all the way through and avoid having to do the conversion at each equality check.)

Posted by: Mike Shulman on September 1, 2021 1:27 AM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

One reason that working with “named variables” is painful in formalized contexts (i.e. where the meta-theory is forcibly constructive) is the 3rd rule:

• For any set $V$ and element $x\notin V$, if $M\in Tm_{V\cup\{x\}}$, then $\lambda x.M\in Tm_{V}$.

The problem is $x\not\in V$. What will be the evidence/witness to this?

This doesn’t seem like too big an issue, until one tries to implement substitution properly. The $x\not\in V$ can’t be forgotten because otherwise we could easily end up with name capture. There’s a reason that proving that substitution is associative is a lot of work in pretty much all representations.

When dealing with bound variables, one indeed has a whole slew of choices. However, when dealing with presentations of theories instead, many of the same choices become unacceptable. For one, the main consumer of a presentation is a human and, for humans, names matter. A lot. We use names all the time to communicate intent. Using $+$ in a context where your operation is neither associative nor commutative will largely confuse your readers, so that your writing will fail at its core job: communication.

Nevertheless, the category of presentations does validate $\alpha$-equivalence. It doesn’t matter what you call the binary operation of a Magma, they will all be equivalent. They are just not equal.

In other words, theory presentations are both ‘intensional’ and ‘intentional’, but their semantics can indeed be extensional. Thus one needs to use explicit transport (i.e. renamings) when leveraging theorems proved in one presentation to be theorems in another. Usability then pushes implementors to try to automate this as much as possible.

Still think this is trivial? Notice how

• Lean’s mathlib has both has_add and has_mul, which leads to having both ‘additive semigroup’ and ‘semigroup’!

• Isabelle’s standard typeclass hierarchy has the same

• The version of Monoid defined in mathcomp does the same thing.

and so on.

I consider this to be quite a horrifying hack.

Posted by: Jacques Carette on September 1, 2021 2:56 PM | Permalink | Reply to this

### Nominal sets

Nice post! Are you familiar with Gabbay and Pitts’ nominal approach to binding? I think it allows shadowing the way you described: if you have a $\lambda$-term $t$ with free variables in $V\cup\{x\}$, then $\lambda x.t$ has free variables in $V$ (actually, $V \setminus \{x\}$, to be tighter).

Nominal sets are sets endowed with an action of the group of permutations of $\mathbb{N}$. Each number represents a variable name, and the group action renames variables. We require that each element $x$ of a nominal set have a finite support $V$: if a permutation fixes all elements of $V$, then its action fixes $x$ as well. (They form a category that is equivalent to the Schanuel topos, if that sounds more familiar.)

We can define the syntax of the $\lambda$-calculus modulo $\alpha$-equivalence directly as a nominal set, by building the initial algebra of a certain functor. The definition is well-scoped, but only implicitly: you never describe the scope (= the finite support) of elements of a nominal set directly. However, you can compute what it means for a $\lambda$-abstraction to have free variables in some set $V$, and you get exactly the description above.

Posted by: Arthur Azevedo de Amorim on September 11, 2021 12:09 AM | Permalink | Reply to this

### Re: Nominal sets

I’ve been hearing about nominal sets for some time, but I’m still not really sold on them. I’m not sure exactly why. I’m happy with other topos-based definitions of syntax, e.g. in presheaves on renamings. Maybe I would be happier with it if I sat down and unwound exactly what it all means in terms of the Schanuel topos (which I’m not very familiar with either yet).

Posted by: Mike Shulman on September 11, 2021 11:43 PM | Permalink | Reply to this

### Re: Nominal sets

I think one reason I have trouble with nominal sets is that it doesn’t seem that a definition of syntax using nominal sets is one that I can go code up in my favorite programming language. Right? My understanding is that it only makes sense inside the Schanuel topos, and requires a programming language augmented with “nominal” features to make use of it.

Posted by: Mike Shulman on September 12, 2021 5:58 AM | Permalink | Reply to this

### Re: You Could Have Invented De Bruijn Indices

You are right; to use nominal binding, it is much better to have a tool that internalizes the theory. Unfortunately, such tools are underdeveloped, but there are some options, implemented or on paper. The most well known one is the nominal package for Isabelle (https://isabelle.in.tum.de/nominal/main.html), but its development seems to have stalled around 2012. Ulrich Schöpp’s thesis “Names and Binding in Type Theory” (https://era.ed.ac.uk/handle/1842/1203) has a nominal dependent type theory, but only exists on paper, as far as I can tell.

In such a tool, it would be possible to define the syntax of the $\lambda$-calculus as follows, in pseudo-Coq syntax:

Inductive term := | Var (n : name) | App (t1 t2 : term) | Abs (t : abs term).

Here, name is the type of variables, and abs T is the type of elements of the form <n> t, where n : name, t : T, and elements are treated up to $\alpha$-equivalence.

Posted by: Arthur Azevedo de Amorim on September 22, 2021 4:08 PM | Permalink | Reply to this

