### Native Type Theory (Part 3)

#### Posted by John Baez

*guest post by Christian Williams*

In Part 2 we described *higher-order algebraic theories*: categories with products and finite-order exponents, which present languages with (binding) operations, equations, and rewrites; from these we construct native type systems.

Now let’s use the wisdom of the Yoneda embedding!

Every category embeds into a topos of presheaves

$y\colon C\rightarrowtail \mathscr{P}C=[C^{op},Set]$

$y(c) = C(-,c) \quad\quad y(f)=C(-,f)\colon C(-,c)\to C(-,d).$

If $(C,\otimes,[-,-])$ is monoidal closed, then the embedding preserves this structure:

$y(c\otimes d)\simeq y(c)\otimes y(d) \quad \quad y([c,d])\simeq [y(c),y(d)]$

i.e. using Day convolution, $y$ is monoidal closed. So, we can move into a richer environment while preserving *higher-order algebraic structure*, or *languages*.

We now explore the native type system of a language, using the $\rho$-calculus as our running example. The complete type system is in the paper, page 9.

## Representables

The simplest kind of object of the native type system is a **representable** $T(-,\mathtt{S})$. This is the set of all terms of sort $\mathtt{S}$, indexed by the context of the language. Whereas many works in computer science either restrict to closed terms or lump all terms together, this indexing is natural and useful.

In the $\rho$-calculus, $y(\mathtt{P}) = T_\rho(-,\mathtt{P})$ is the indexed set of all processes.

$y(\mathtt{P})(\Gamma) = \{p \;|\; (x_1,\dots,x_n):\Gamma \vdash p:\mathtt{P}\}.$
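To make the indexing concrete, here is a small Python sketch using a hypothetical mini-grammar (not the full $\rho$-calculus) that enumerates terms over a context and shows the contravariant action by substitution:

```python
# Toy fragment of a process grammar:  p ::= 0 | x | (p|p)
# y(P)(Gamma) is the set of terms with free variables drawn from Gamma.
# (Hypothetical mini-syntax, for illustration only.)

def terms(ctx, depth):
    """All terms over the context ctx (a list of variable names) up to a depth bound."""
    ts = {"0"} | set(ctx)
    if depth > 0:
        sub = terms(ctx, depth - 1)
        ts |= {f"({p}|{q})" for p in sub for q in sub}
    return ts

def substitute(term, assignment):
    """The contravariant presheaf action: a map of contexts acts on terms by
    substitution. (Naive string replacement; fine here since variables are
    single letters that do not overlap.)"""
    for var, rep in assignment.items():
        term = term.replace(var, rep)
    return term

assert len(terms([], 1)) == 2          # 0 and (0|0)
assert len(terms(["x"], 1)) == 6       # 0, x, and four parallel pairs
assert substitute("(x|0)", {"x": "(0|0)"}) == "((0|0)|0)"
```

The point of the indexing is already visible: the same grammar yields different sets over different contexts, and context maps act on terms by substitution.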

The type system is built from these basic objects by the operations of $T$ and the structure of $\mathscr{P}T$. We can then construct predicates, dependent types, co/limits, etc., and each constructor has corresponding inference rules which can be used by a computer.

## Predicates and Types

The language of a topos is represented by two fibrations: the subobject fibration gives *predicate logic*, and the codomain fibration gives *dependent type theory*. Hence the two basic entities are predicates and (dependent) types. Types are more general, and we can think of them as the “new sorts” of language $T$, which can be much more expressive.

A predicate $\varphi:y(\mathtt{P})\to \Omega$ corresponds to a subobject of a representable $\{p \;|\; \varphi(p)\}\rightarrowtail y(\mathtt{P})$, which is equivalent to a *sieve*: a set of morphisms into $\mathtt{P}$ closed under precomposition, i.e. if $g$ belongs to the sieve then so does $g\circ f$ for every composable $f$.

This emphasizes the idea that predicate logic over representables is actually reasoning about *abstract syntax trees*: here $g$ is some tree of operations in $T$ with an $\mathtt{S}$-shaped hole of variables, and the predicate $\varphi$ only cares about the outer shape of $g$; you can plug in any term $f$ and still satisfy $\varphi$.
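In set-theoretic miniature (terms as nested tuples, an illustrative encoding only), the closure property looks like this: a predicate that inspects only the outermost operation of a tree is automatically stable under plugging terms into its variable holes:

```python
# A predicate on open terms that only inspects the outermost operation,
# so it is stable under substituting arbitrary terms for variables --
# the closure property that makes a set of terms a sieve.
# (Toy term representation, for illustration only.)

def phi(term):
    """'term is a parallel composition' -- looks only at the outer shape."""
    return isinstance(term, tuple) and term[0] == "par"

def substitute(term, env):
    if isinstance(term, str):          # a free variable
        return env.get(term, term)
    return (term[0],) + tuple(substitute(t, env) for t in term[1:])

g = ("par", "x", ("zero",))            # g has an x-shaped hole
assert phi(g)
for f in [("zero",), ("par", ("zero",), ("zero",)), ("out", ("zero",))]:
    assert phi(substitute(g, {"x": f}))   # precomposition preserves phi
```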

More generally, a morphism $f\colon B\to A$ is understood as an “indexed presheaf” or *dependent type*

$x:A\vdash B(x):Type.$

i.e. for every element $x\colon X\to A$, there is a fiber $B(x):= f^*(x)$ which is the “type depending on term $x$”.
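On finite sets this is easy to model; a quick sketch (hypothetical data) reading a function as a family of fibers:

```python
# A map f : B -> A of finite sets, read as a dependent type  x:A |- B(x):
# the fiber over x is the preimage f^{-1}(x).  (Made-up sets, illustration only.)

def fiber(f, B, x):
    """B(x) := f^*(x), the preimage of x under f."""
    return {b for b in B if f[b] == x}

B = {"b1", "b2", "b3"}
f = {"b1": "a", "b2": "a", "b3": "c"}
assert fiber(f, B, "a") == {"b1", "b2"}
assert fiber(f, B, "c") == {"b3"}
assert fiber(f, B, "d") == set()   # an empty fiber: the type B(d) is uninhabited
```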

An example of a type in the $\rho$-calculus is given by the input operation,

$y(\mathtt{in}):y(\mathtt{N\times [N,P]})\to y(\mathtt{P})$

where the fiber over a process $q$ is the set of all channel–continuation pairs $(n,\lambda x.p)$ such that $\mathtt{in}(n,\lambda x.p) = q$; pulling back a predicate $\varphi$ along this map gives the pairs such that $\varphi(\mathtt{in}(n,\lambda x.p))$.

## Dependent Sum and Product

Here we use the structure described in Part 1. The predicate functor $\mathscr{P}T(-,\Omega):\mathscr{P}T^{op}\to CHA$ is a **hyperdoctrine**, which for each presheaf $A$ gives a complete Heyting algebra of predicates $\Omega^A$, and for each $f\colon B\to A$ gives adjoints $\exists_f\dashv \Omega^f\dashv \forall_f\colon \Omega^B\to \Omega^A$ for *image*, *preimage*, and *secure image*.
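On finite sets the three adjoints are a few lines each; the sketch below (hand-picked toy sets, illustration only) also checks one adjunction as an inequality of subsets:

```python
# The three adjoints along a function f : B -> A, modeled on finite sets:
# existential image, preimage, and universal ("secure") image.
# (Toy data, for illustration only.)

def image(f, phi):                       # corresponds to the left adjoint
    return {f[b] for b in phi}

def preimage(f, B, psi):                 # substitution / pullback
    return {b for b in B if f[b] in psi}

def secure_image(f, A, B, phi):          # right adjoint: whole fiber must satisfy phi
    return {a for a in A if all(b in phi for b in B if f[b] == a)}

A, B = {"a", "c"}, {"b1", "b2", "b3"}
f = {"b1": "a", "b2": "a", "b3": "c"}
phi = {"b1", "b3"}
assert image(f, phi) == {"a", "c"}
assert secure_image(f, A, B, phi) == {"c"}   # "a" fails: b2 maps to a but b2 not in phi

# One adjunction, as an iff of subset inequalities:
psi = {"a", "c"}
assert (image(f, phi) <= psi) == (phi <= preimage(f, B, psi))
```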

Similarly, the slice functor $\mathscr{P}T/-:\mathscr{P}T^{op} \to CCT$ is a hyperdoctrine into co/complete toposes with adjoints $\Sigma_f\dashv \Delta^f\dashv \Pi_f$. These are **dependent sum**, **substitution**, and **dependent product**. From these we can reconstruct all the operations of predicate logic, and much more.

As (briefly) explained in Part 1, the idea of dependent sum is that *indexed sums generalize products*; here the codomain is the set of indices and its fibers are the sets in the family; so an element of the indexed sum is a *dependent pair* $(a,x\in X_a)$. Dually, *indexed products generalize functions*: an element of the product of the fibers is a tuple $(x_1\in X_{a_1},\dots,x_n\in X_{a_n})$ which can be understood as a *dependent function* where the codomain $X_a$ depends on which $a$ you plug in.

Explicitly, given $f\colon A\to B$ and $p\colon X\to A$, $q\colon Y\to B$, we have $\Delta_f(q)_\mathtt{S}^a = q_\mathtt{S}^{f_\mathtt{S}(a)}$ and, fiberwise,

$\Sigma_f(p)_\mathtt{S}^{b} = \coprod_{f_\mathtt{S}(a)=b} p_\mathtt{S}^{a} \quad\quad \Pi_f(p)_\mathtt{S}^{b} = \prod_{f_\mathtt{S}(a)=b} p_\mathtt{S}^{a}$

(letting $X_\mathtt{S}=X(\mathtt{S})$ and $p_\mathtt{S}^b$ denote the fiber over $b$). These are the familiar formulae in $Set$; over presheaves the intuition is essentially the same, except we must ensure the resulting objects are still presheaves, i.e. closed under precomposition. The point is:
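A finite-set sketch of the fiberwise description (made-up data; the indexing over sorts is suppressed):

```python
from itertools import product as cartesian

# Dependent sum and product along f : A -> B, for a family p : X -> A
# of finite sets, computed fiberwise.  (Toy data, illustration only.)

def fibers(p, X):
    out = {}
    for x in X:
        out.setdefault(p[x], set()).add(x)
    return out

def dep_sum(f, p, X, b):
    """(Sigma_f X)_b: dependent pairs (a, x) with f(a) = b and x in the fiber X_a."""
    Xa = fibers(p, X)
    return {(a, x) for a in f if f[a] == b for x in Xa.get(a, set())}

def dep_prod(f, p, X, b):
    """(Pi_f X)_b: dependent functions choosing one x in X_a for each a with f(a) = b."""
    Xa = fibers(p, X)
    dom = sorted(a for a in f if f[a] == b)
    return {tuple(zip(dom, choice))
            for choice in cartesian(*(sorted(Xa.get(a, set())) for a in dom))}

f = {"a1": "b", "a2": "b"}
X = {"x1", "x2", "x3"}
p = {"x1": "a1", "x2": "a1", "x3": "a2"}
assert dep_sum(f, p, X, "b") == {("a1", "x1"), ("a1", "x2"), ("a2", "x3")}
assert len(dep_prod(f, p, X, "b")) == 2   # 2 choices over a1, times 1 over a2
```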

$\Sigma \;\; \text{generalizes product, and categorifies } \;\; \exists \;\; \text{ or image; and}$

$\Pi \;\; \text{generalizes internal hom, and categorifies } \;\; \forall \;\; \text{ or secure image.}$

The main examples start with just “pushing forward” operations in the theory, using $\exists$. Given an operation $f\colon \mathtt{S}\to \mathtt{T}$, the image $\exists_{y(f)}:\Omega^{y(\mathtt{S})}\to \Omega^{y(\mathtt{T})}$ takes a predicate (sieve) $\varphi\rightarrowtail y(\mathtt{S})$ and simply postcomposes every term in $\varphi$ with $f$.

Hence an example predicate (leaving $\exists$ and $y$ implicit) is

$\mathsf{multi.thread} = \neg(0)\vert \neg(0) \;\; \rightarrowtail y(\mathtt{P}).$

This predicate determines the processes which are a parallel composition of two non-null processes.

As an example of the distinct utility of the adjoints, recall from Part 2 that we can model computational dynamics using a graph of processes and rewrites $s,t:\mathtt{E\to P}$. Now these operations give adjunctions between sieves on $\mathtt{E}$ and sieves on $\mathtt{P}$, which give operators for “step forward or backward”:

$\Sigma_t\Omega^s(\varphi) = \{q \;|\; \exists p, r.\;\; r:p\rightsquigarrow q \;\wedge\; \varphi(p)\}$

$\Pi_t\Omega^s(\varphi) = \{q \;|\; \forall p, r.\;\; r:p\rightsquigarrow q \;\Rightarrow\; \varphi(p)\}$

While “image” step-forward gives all possible next terms, the “secure” step-forward gives terms which could *only* have come from $\varphi$. For security protocols, this can be used to *filter processes by past behavior*.
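Here is a toy rewrite graph in Python illustrating the two step-forward operators (all names made up for illustration):

```python
# A toy rewrite graph s, t : E -> P and the two step-forward operators:
# possible-next (existential) and secure-next (universal).

P = {"p", "q", "r"}
E = {"e1", "e2", "e3"}
s = {"e1": "p", "e2": "q", "e3": "p"}   # source of each rewrite
t = {"e1": "q", "e2": "r", "e3": "r"}   # target of each rewrite

def step_exists(phi):
    """Sigma_t Omega^s(phi): states reachable in one step from some state in phi."""
    return {t[e] for e in E if s[e] in phi}

def step_secure(phi):
    """Pi_t Omega^s(phi): states ALL of whose one-step predecessors lie in phi.
    (States with no predecessors are vacuously included.)"""
    return {x for x in P if all(s[e] in phi for e in E if t[e] == x)}

phi = {"p"}
assert step_exists(phi) == {"q", "r"}   # e1: p ~> q and e3: p ~> r
assert step_secure(phi) == {"p", "q"}   # r is excluded: it can also come from q via e2
```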

## Image / Comprehension and Subtyping

Predicates and types are related by an adjunction between the fibrations.

To convert a predicate $\varphi:A\to \Omega$ to a type, apply comprehension to construct the subobject of terms $\mathrm{c}(\varphi)$ which satisfy $\varphi$. To convert a type $p:X\to A$ to a predicate, apply image factorization to construct the predicate $\mathrm{i}(p)$ for whether each fiber is inhabited.

We implicitly use the comprehension direction all the time (thinking of predicates as their subobjects); and while taking the image is more destructive, it can certainly be useful for the sake of simplification. For example, rather than thinking about the type $y(\mathtt{out}):y(\mathtt{N\times P})\to y(\mathtt{P})$, we may simply want to consider the image $\mathrm{i}(y(\mathtt{out}))$, the set of all output processes.
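A finite-set sketch of both directions (toy data; the "output processes" are just tagged name–body pairs):

```python
# Converting between types and predicates on finite sets:
# comprehension carves out a subset; image asks which fibers are inhabited.
# (Made-up data, for illustration only.)

def comprehension(phi, A):
    """c(phi): the subset of terms satisfying phi."""
    return {a for a in A if phi(a)}

def image_pred(p, X):
    """i(p): the predicate 'the fiber over a is inhabited', i.e. the image of p."""
    return {p[x] for x in X}

A = {"out1", "out2", "in1"}
X = {("n", "p1"), ("m", "p2")}                   # toy name-body pairs
p = {("n", "p1"): "out1", ("m", "p2"): "out2"}   # the 'out' operation on them
assert image_pred(p, X) == {"out1", "out2"}      # the output processes
assert comprehension(lambda a: a in image_pred(p, X), A) == {"out1", "out2"}
```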

## Internal Hom and Reification

While the Grothendieck construction is relatively well known, it is less widely appreciated that the *local* structure of an indexed category (here, complete Heyting algebras of predicates) can often be converted to a *global* structure on the total category of the corresponding fibration. The total category of the predicate functor, $\Omega\mathscr{P}T$, is cartesian closed, allowing us to construct *predicate homs*.

The construction can be understood in the category of sets. Given predicates $\varphi\colon A\to 2$ and $\psi\colon B\to 2$, we can define

$[\varphi,\psi]:[A,B]\to 2 \quad \quad [\varphi, \psi](f) = \forall a\in A.\; \varphi(a)\Rightarrow \psi(f(a)).$

Hence it constructs “contexts which ensure implications”.
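In finite sets the predicate hom is directly computable; a sketch with made-up two-element sets:

```python
# The predicate hom in Set: [phi, psi] holds of f iff f maps phi into psi.
# (Toy sets, for illustration only.)

def hom_pred(phi, psi, A):
    def holds(f):
        return all((a not in phi) or (f[a] in psi) for a in A)
    return holds

A, B = {0, 1}, {"u", "v"}
phi, psi = {0}, {"u"}
fs = [{0: x, 1: y} for x in B for y in B]        # all four functions A -> B
good = [f for f in fs if hom_pred(phi, psi, A)(f)]
assert all(f[0] == "u" for f in good)            # exactly the f sending phi into psi
assert len(good) == 2                            # f(1) is unconstrained
```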

For example, we can construct the “wand” of separation logic: let $T_h$ be the theory of a commutative monoid $(H,\cup,e)$, with a set of constants $\{h\}:1\to H$ adjoined as the elements of a heap. If we define

$(\varphi \multimap \psi) = \Omega^{\lambda x.x\cup-}[\varphi, \psi]$

then $h_1:(\varphi \multimap \psi)$ asserts that for every $h_2$, if $h_2:\varphi$ then $h_1\cup h_2:\psi$.
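Modeling heaps as finite sets of cells with union as the monoid operation, the wand becomes a brute-force check over a finite lattice of heaps (toy model, illustration only):

```python
from itertools import chain, combinations

# Heaps as finite subsets of a cell set; union is the commutative monoid operation.
# The wand phi -* psi: heaps h1 such that for every h2 with phi(h2),
# the combined heap h1 | h2 satisfies psi.  (Toy model, illustration only.)

cells = {"a", "b"}
heaps = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(cells), k) for k in range(len(cells) + 1))]

def wand(phi, psi):
    return {h1 for h1 in heaps
            if all(h1 | h2 in psi for h2 in heaps if h2 in phi)}

phi = {h for h in heaps if "a" in h}               # "the heap owns cell a"
psi = {h for h in heaps if "a" in h and "b" in h}  # "the heap owns a and b"
assert wand(phi, psi) == {h for h in heaps if "b" in h}   # "owning b suffices"
```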

There is a much more expressive way of forming homs which we call reification (p7); we do not know if it has been explored, and we have yet to determine its relation to dependent product.

## co/Induction

Similarly, the fibers of $\Omega\mathscr{P}T\to \mathscr{P}T$ are co/complete, and this fiberwise structure can be assembled into a global co/complete structure on the total category, which we can use to construct co/inductive types.

For example, given a predicate on names $\alpha$, we may construct a predicate for “liveness and safety” on $\alpha$:

$\mathsf{sole.in}(\alpha) = \mu X.\; \mathtt{in}(\alpha,\mathtt{N}.X)\wedge \neg\mathtt{in}(\neg\alpha,\mathtt{N}.\mathtt{P})$

where $\mu$ denotes the initial algebra, which is constructed as a colimit. This determines whether a process inputs on $\alpha$, does not input on $\neg\alpha$, *and continues as a process which satisfies this same predicate*. This can be understood as a static type for a **firewall**.
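A rough finite-state sketch (toy transition system, made-up names): the self-referential clause is unfolded by iterating to a fixed point, starting from the full set of states so that indefinitely looping firewalls qualify:

```python
# Rough finite-state sketch of the firewall predicate (illustration only):
# a state passes iff every action is an input on a channel in alpha
# and every continuation passes again.

trans = {
    "fw":    [("secure", "fw")],                    # inputs only on alpha, loops
    "leaky": [("secure", "fw"), ("open", "bad")],   # also inputs on a bad channel
    "bad":   [("open", "bad")],
}
alpha = {"secure"}

def step(X):
    """One unfolding of the clause: some input, all on alpha, all continuations in X."""
    return {p for p, ts in trans.items()
            if ts and all(ch in alpha and q in X for ch, q in ts)}

X = set(trans)            # iterate from the top until the clause stabilizes
while step(X) != X:
    X = step(X)
assert X == {"fw"}        # only the looping firewall survives
```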

## Applications

Once these type constructors are combined, they can express highly useful and complex ideas about code. The best part is that this type system can be generated from *any* language with product and function types, which includes large chunks of many popular programming languages.

To get a feel for more applications, check out the final section of Native Type Theory. Of course, check out the rest of the paper, and let me know what you think! Thank you for reading.

## Re: Native Type Theory (Part 3)

Sorry I haven’t been following, but is it because you don’t have object classifiers/universes that you make such a distinction between dependent types and predicates in ‘Image / Comprehension and Subtyping’?

As I’m sure you know, in HoTT, taking predicates as dependent propositions, then the same constructions are in play for predicates and dependent types, modulo the need for propositional truncation.

Perhaps then the question is what advantages does native type theory have over HoTT?