
March 29, 2021

Native Type Theory (Part 3)

Posted by John Baez

guest post by Christian Williams

In Part 2 we described higher-order algebraic theories: categories with products and finite-order exponents, which present languages with (binding) operations, equations, and rewrites; from these we construct native type systems.

Now let’s use the wisdom of the Yoneda embedding!

Every category embeds into a topos of presheaves

y\colon C\rightarrowtail \mathscr{P}C=[C^{op},Set]

y(c) = C(-,c) \quad\quad y(f)=C(-,f)\colon C(-,c)\to C(-,d).

If (C,\otimes,[-,-]) is monoidal closed, then the embedding preserves this structure:

y(c\otimes d)\simeq y(c)\otimes y(d) \quad\quad y([c,d])\simeq [y(c),y(d)]

i.e. using Day convolution, y is monoidal closed. So, we can move into a richer environment while preserving higher-order algebraic structure, or languages.
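To make the representable presheaves concrete, here is a small Python sketch. The finite category below (objects A, B and one non-identity arrow f) is my own illustration, not from the post; y(c) is computed as the family of hom-sets C(d, c), with the presheaf action given by precomposition.

```python
# A toy finite category: objects A, B and one non-identity arrow f: A -> B.
objects = ["A", "B"]
morphisms = {            # name -> (source, target)
    "idA": ("A", "A"),
    "idB": ("B", "B"),
    "f":   ("A", "B"),
}

def compose(g, h):
    """The composite g . h (apply h first, then g)."""
    (hs, ht), (gs, gt) = morphisms[h], morphisms[g]
    assert ht == gs, "not composable"
    if h.startswith("id"):
        return g
    if g.startswith("id"):
        return h
    raise ValueError("no such composite in this category")

def y(c):
    """The representable presheaf y(c): d |-> C(d, c)."""
    return {d: {m for m, (s, t) in morphisms.items() if s == d and t == c}
            for d in objects}

# y(B) collects every morphism into B, indexed by its source:
assert y("B") == {"A": {"f"}, "B": {"idB"}}
# the presheaf action is precomposition, e.g. idB . f = f:
assert compose("idB", "f") == "f"
```

The dictionary returned by `y(c)` is exactly the "indexed set" picture: one set of morphisms into c for each object.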

We now explore the native type system of a language, using the ρ-calculus as our running example. The complete type system is in the paper, page 9.

Representables

The simplest kind of object of the native type system is a representable T(-,\mathtt{S}). This is the set of all terms of sort \mathtt{S}, indexed by the context of the language. Whereas many works in computer science either restrict to closed terms or lump all terms together, this indexing is natural and useful.

In the ρ-calculus, y(\mathtt{P}) = T_\rho(-,\mathtt{P}) is the indexed set of all processes.

y(\mathtt{P})(\Gamma) = \{p \;|\; (x_1,\dots,x_n):\Gamma \vdash p:\mathtt{P}\}.

The type system is built from these basic objects by the operations of T and the structure of \mathscr{P}T. We can then construct predicates, dependent types, co/limits, etc., and each constructor has corresponding inference rules which can be used by a computer.

Predicates and Types

The language of a topos is represented by two fibrations: the subobject fibration gives predicate logic, and the codomain fibration gives dependent type theory. Hence the two basic entities are predicates and (dependent) types. Types are more general, and we can think of them as the “new sorts” of the language T, which can be much more expressive.

A predicate \varphi\colon y(\mathtt{P})\to \Omega corresponds to a subobject of a representable \{p \;|\; \varphi(p)\}\rightarrowtail y(\mathtt{P}), which is equivalent to a sieve: a set of morphisms into \mathtt{P}, closed under precomposition.

This emphasizes the idea that predicate logic over representables is actually reasoning about abstract syntax trees: here g is some tree of operations in T with an \mathtt{S}-shaped hole of variables, and the predicate φ only cares about the outer shape of g; you can plug in any term f while still satisfying φ.

More generally, a morphism f\colon B\to A is understood as an “indexed presheaf” or dependent type

x:A\vdash B(x):Type.

i.e. for every element x\colon X\to A, there is a fiber B(x):= f^*(x) which is the “type depending on term x”.
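In Set this is just the family of fibers of a function. A minimal sketch, with illustrative names:

```python
# A function f: B -> A, given as a dict, viewed as the dependent type
# x:A |- B(x), where B(x) is the fiber over x. All names are illustrative.
def fiber(f, B, x):
    """The fiber of f over x: all b in B with f(b) = x."""
    return {b for b in B if f[b] == x}

B = {"b1", "b2", "b3"}
f = {"b1": "a1", "b2": "a1", "b3": "a2"}
assert fiber(f, B, "a1") == {"b1", "b2"}
assert fiber(f, B, "a2") == {"b3"}
assert fiber(f, B, "a3") == set()   # empty fibers are allowed
```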

An example of a type in the ρ-calculus is given by the input operation,

y(\mathtt{in})\colon y(\mathtt{N\times P\times [N,P]})\to y(\mathtt{P})

where the fiber over φ is the set of all channel-context pairs (n,\lambda x.p) such that \varphi(\mathtt{in}(n,\lambda x.p)).

Dependent Sum and Product

Here we use the structure described in Part 1. The predicate functor \mathscr{P}T(-,\Omega)\colon \mathscr{P}T^{op}\to CHA is a hyperdoctrine, which for each presheaf A gives a complete Heyting algebra of predicates \Omega^A, and for each f\colon B\to A gives adjoints \exists_f\dashv \Omega^f\dashv \forall_f\colon \Omega^B\to \Omega^A for image, preimage, and secure image.

Similarly, the slice functor \mathscr{P}T/-\colon \mathscr{P}T^{op} \to CCT is a hyperdoctrine into co/complete toposes with adjoints \Sigma_f\dashv \Delta^f\dashv \Pi_f. These are dependent sum, substitution, and dependent product. From these we can reconstruct all the operations of predicate logic, and much more.

As (briefly) explained in Part 1, the idea of dependent sum is that indexed sums generalize products; here the codomain is the set of indices and its fibers are the sets in the family; so an element of the indexed sum is a dependent pair (a, x\in X_a). Dually, indexed products generalize functions: an element of the product of the fibers is a tuple (x_1\in X_{a_1},\dots,x_n\in X_{a_n}) which can be understood as a dependent function where the codomain X_a depends on which a you plug in.

Explicitly, given f\colon A\to B and p\colon X\to A, q\colon Y\to B, we have \Delta_f(q)_\mathtt{S}^a = q_\mathtt{S}^{f_\mathtt{S}(a)} and

\begin{array}{ll} \Sigma_f(p)_\mathtt{S}^b & := \sum_{a:A_\mathtt{S}}\sum_{f_\mathtt{S}(a)=b} p_\mathtt{S}^{a} \\ \Pi_f(p)_\mathtt{S}^b & := \prod_{u:\mathtt{R}\to \mathtt{S}}\prod_{f_\mathtt{R}(a)=B(u)(b)} p_\mathtt{R}^{a} \end{array}

(letting X_\mathtt{S}=X(\mathtt{S}) and p_\mathtt{S}^b denote the fiber over b). Despite the complex formulae, the intuition is essentially the same as in Set, except we need to ensure the resulting objects are still presheaves, i.e. closed under precomposition. The point is:

\Sigma \;\; \text{generalizes product, and categorifies } \;\; \exists \;\; \text{ or image; and}

\Pi \;\; \text{generalizes internal hom, and categorifies } \;\; \forall \;\; \text{ or secure image.}
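In Set these formulas reduce to familiar operations on an indexed family. Here is a sketch of dependent sum and product along a map f, with illustrative data (the names `dep_sum`, `dep_prod` are mine):

```python
from itertools import product

# Dependent sum and product along f: A -> B in Set, for a family p assigning
# to each a in A a set p[a]. The data below is illustrative.
def dep_sum(f, p, b):
    """Sigma_f(p) over b: dependent pairs (a, x) with f(a) = b and x in p[a]."""
    return {(a, x) for a in p if f[a] == b for x in p[a]}

def dep_prod(f, p, b):
    """Pi_f(p) over b: choice functions picking one x in p[a] for each a over b."""
    fib = sorted(a for a in p if f[a] == b)
    return {tuple(zip(fib, choice))
            for choice in product(*(sorted(p[a]) for a in fib))}

f = {"a1": "b", "a2": "b"}
p = {"a1": {0, 1}, "a2": {2}}
assert dep_sum(f, p, "b") == {("a1", 0), ("a1", 1), ("a2", 2)}
assert len(dep_prod(f, p, "b")) == 2          # two dependent functions
assert all(("a2", 2) in g for g in dep_prod(f, p, "b"))
```

The sum collects tagged elements of the fibers; the product collects choice functions, i.e. dependent functions whose codomain varies with the index.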

The main examples start with just “pushing forward” operations in the theory, using \exists. Given an operation f\colon \mathtt{S}\to \mathtt{T}, the image \exists_{y(f)}\colon \Omega^{y(\mathtt{S})}\to \Omega^{y(\mathtt{T})} takes a predicate (sieve) \varphi\rightarrowtail y(\mathtt{S}) and simply postcomposes every term in φ with f.

Hence an example predicate (leaving \exists and y implicit) is

\mathsf{multi.thread} = \neg(0)\vert \neg(0) \;\; \rightarrowtail y(\mathtt{P}).

This predicate determines processes which are the parallel composition of two non-null processes.
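On a toy encoding of process syntax (my own: "0" for the null process, other strings for atomic processes, tuples for parallel composition), the predicate can be checked directly:

```python
# Toy process syntax (illustrative encoding): "0" is the null process, any
# other string is an atomic process, and ("par", p, q) is parallel p | q.
def multi_thread(p):
    """The multi.thread predicate: p is a parallel of two non-null processes."""
    return isinstance(p, tuple) and p[0] == "par" and p[1] != "0" and p[2] != "0"

assert multi_thread(("par", "a", "b"))
assert multi_thread(("par", "a", ("par", "b", "c")))
assert not multi_thread(("par", "0", "b"))
assert not multi_thread("a")
```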

As an example of the distinct utility of the adjoints, recall from Part 2 that we can model computational dynamics using a graph of processes and rewrites s,t\colon \mathtt{E}\to \mathtt{P}. Now these operations give adjunctions between sieves on \mathtt{E} and sieves on \mathtt{P}, which give operators for “step forward or backward”:

\Sigma_t\Omega^s(\varphi) = \{q \;|\; \exists r.\; r:p\rightsquigarrow q \wedge \varphi(p)\}

\Pi_t(\Omega^s(\varphi)) = \{q \;|\; \forall r.\; r:p\rightsquigarrow q \Rightarrow \varphi(p)\}

While “image” step-forward gives all possible next terms, the “secure” step-forward gives terms which could only have come from φ. For security protocols, this can be used to filter processes by past behavior.
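Set-level versions of these two operators are easy to sketch on a finite rewrite graph (the states and edges below are illustrative):

```python
# Illustrative rewrite graph: an edge (p, q) means there is a rewrite p ~> q.
states = {"p1", "p2", "q1", "q2"}
edges = {("p1", "q1"), ("p2", "q1"), ("p1", "q2")}

def step_image(phi):
    """Sigma_t Omega^s: states reachable in one step from some state in phi."""
    return {q for (p, q) in edges if p in phi}

def step_secure(phi):
    """Pi_t Omega^s: states all of whose one-step predecessors lie in phi
    (vacuously including states with no incoming rewrite)."""
    return {q for q in states if all(p in phi for (p, qq) in edges if qq == q)}

phi = {"p1"}
assert step_image(phi) == {"q1", "q2"}
# q1 is also reachable from p2, which is outside phi, so it is not "secure":
assert "q2" in step_secure(phi) and "q1" not in step_secure(phi)
```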

Image / Comprehension and Subtyping

Predicates and types are related by an adjunction between the fibrations.

To convert a predicate \varphi\colon A\to \Omega to a type, apply comprehension to construct the subobject of terms \mathrm{c}(\varphi) which satisfy φ. To convert a type p\colon X\to A to a predicate, apply image factorization to construct the predicate \mathrm{i}(p) for whether each fiber is inhabited.

We implicitly use the comprehension direction all the time (thinking of predicates as their subobjects); and while taking the image is more destructive, it can certainly be useful for the sake of simplification. For example, rather than thinking about the type y(\mathtt{out})\colon y(\mathtt{N\times P})\to y(\mathtt{P}), we may simply want to consider the image \mathrm{i}(y(\mathtt{out})), the set of all output processes.
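In Set, image factorization is just taking the set of values hit by the map. A small sketch with illustrative names:

```python
# Converting a type p: X -> A into a predicate by image factorization:
# i(p) holds of a exactly when the fiber over a is inhabited. Data illustrative.
def image_predicate(p, X):
    """The set {a | some x in X has p(x) = a}."""
    return {p[x] for x in X}

X = {"x1", "x2"}
p = {"x1": "a1", "x2": "a1"}
assert image_predicate(p, X) == {"a1"}   # any other a would have an empty fiber
```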

Internal Hom and Reification

While the Grothendieck construction is relatively well known, there is less awareness of how the local structure of an indexed category (complete Heyting algebras, in the case of predicates) can often be converted into a global structure on the total category of the corresponding fibration. The total category \Omega\mathscr{P}T of the predicate functor is cartesian closed, allowing us to construct predicate homs.

The construction can be understood in the category of sets. Given \varphi:2^A and \psi:2^B, we can define

[\varphi,\psi]\colon [A,B]\to 2 \quad\quad [\varphi, \psi](f) = \forall a\in A.\; \varphi(a)\Rightarrow \psi(f(a)).

Hence it constructs “contexts which ensure implications”.
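A brute-force Python check of this definition, enumerating all functions A → B for small illustrative sets (the helper `hom_pred` is my own name):

```python
from itertools import product

# For phi a predicate on A and psi on B, [phi, psi] holds of f: A -> B
# exactly when phi(a) implies psi(f(a)) for all a.
def hom_pred(A, B, phi, psi):
    """All functions f: A -> B (as dicts) satisfying [phi, psi]."""
    As, Bs = sorted(A), sorted(B)
    funcs = [dict(zip(As, vals)) for vals in product(Bs, repeat=len(As))]
    return [f for f in funcs if all(f[a] in psi for a in phi)]

A, B = {"a1", "a2"}, {"b1", "b2"}
good = hom_pred(A, B, phi={"a1"}, psi={"b1"})
# exactly the functions sending a1 into psi; a2 is unconstrained:
assert len(good) == 2 and all(f["a1"] == "b1" for f in good)
```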

For example, we can construct the “wand” of separation logic: let T_h be the theory of a commutative monoid (H,\cup,e), with a set of constants \{h\}:1\to H adjoined as the elements of a heap. If we define

(\varphi \multimap \psi) = \Omega^{\lambda x.x\cup-}[\varphi, \psi]

then h_1:(\varphi \multimap \psi) asserts that h_2:\varphi\Rightarrow h_1\cup h_2:\psi.
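A toy model of this wand, with heaps as finite sets of locations and union as the monoid operation (the encoding is mine):

```python
# Heaps as finite sets of locations; the commutative monoid is (heaps, union, {}).
heaps = [frozenset(s) for s in [(), ("x",), ("y",), ("x", "y")]]

def wand(phi, psi):
    """(phi -o psi): heaps h1 with h1 | h2 in psi whenever h2 is in phi."""
    return {h1 for h1 in heaps if all((h1 | h2) in psi for h2 in phi)}

phi = {frozenset({"x"})}          # heaps equal to {x}
psi = {frozenset({"x", "y"})}     # heaps equal to {x, y}
# exactly the heaps that supply the missing "y":
assert wand(phi, psi) == {frozenset({"y"}), frozenset({"x", "y"})}
```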

There is a much more expressive way of forming homs which we call reification (p7); we do not know if it has been explored, and we have yet to determine its relation to dependent product.

co/Induction

Similarly, the fibers of \Omega\mathscr{P}T\to \mathscr{P}T are co/complete, and this can be assembled into a global co/complete structure on the total category. Hence, we can use this to construct co/inductive types.

For example, given a predicate on names α, we may construct a predicate for “liveness and safety” on α:

\mathsf{sole.in}(\alpha) = \mu X. \mathtt{in}(\alpha,\mathtt{N}.X)\wedge \neg\mathtt{in}(\neg\alpha,\mathtt{N.P})

where μ denotes the initial algebra, which is constructed as a colimit. This determines whether a process inputs on α, does not input on ¬α, and continues as a process which satisfies this same predicate. This can be understood as a static type for a firewall.
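To see the fixed-point semantics concretely, here is a sketch that interprets μ as a least fixed point computed by Kleene iteration over a finite toy set of processes. The transition data and names are illustrative, and finiteness stands in for the actual colimit construction in the presheaf topos:

```python
# Compute mu X. F(X) by Kleene iteration for a monotone F on subsets
# of a finite set. (Illustrative stand-in for the colimit construction.)
def lfp(F):
    X = set()
    while (Y := F(X)) != X:
        X = Y
    return X

# Toy data: each process inputs on a channel in alpha and continues as cont[p];
# an empty continuation set means the process halts.
cont = {"stop": set(), "good": {"stop"}, "loop": {"loop"}}

# F(X): processes all of whose continuations already satisfy X.
F = lambda X: {p for p in cont if cont[p] <= X}

# The least fixed point admits only well-founded behavior: "loop" is excluded.
assert lfp(F) == {"stop", "good"}
```

The exclusion of "loop" is exactly the initial-algebra (least, rather than greatest) reading of the fixed point.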

Applications

Once these type constructors are combined, they can express highly useful and complex ideas about code. The best part is that this type system can be generated from any language with product and function types, which includes large chunks of many popular programming languages.

To get a feel for more applications, check out the final section of Native Type Theory. Of course, check out the rest of the paper, and let me know what you think! Thank you for reading.

Posted at March 29, 2021 4:57 AM UTC



Re: Native Type Theory (Part 3)

Sorry I haven’t been following, but is it because you don’t have object classifiers/universes that you make such a distinction between dependent types and predicates in ‘Image / Comprehension and Subtyping’?

As I’m sure you know, in HoTT, taking predicates as dependent propositions, the same constructions are in play for predicates and dependent types, modulo the need for propositional truncation.

Perhaps then the question is what advantages does native type theory have over HoTT?

Posted by: David Corfield on March 29, 2021 2:37 PM

Re: Native Type Theory (Part 3)

Hi David, thanks for your comment. Yes, my understanding is that by default the presheaf topos does not have an object classifier; though perhaps there’s a way to augment the category and construct one.

As for comparing to HoTT, I’m not sure I understand the question. It seems that HoTT is the language of a different class of structures. Native type theory is just a simple functorial construction from algebraic theories to type theories, using the internal language of the presheaf topos. There is no higher-equivalence structure; though it would be very interesting if this were implicitly available.

The motivation of NTT is mainly practical: we can model (chunks of) many widely-used programming languages as higher-order algebraic theories with rewriting, and translations between them as morphisms; then functorially produce type systems for these languages which “natively” reason about the structure and behavior of programs.

So I don’t think it’s an either/or situation; though I don’t yet know enough to give them an adequate comparison. Mike Shulman gave an enlightening sketch of a generalization in a previous post, using stacks rather than presheaves. I’m sure our paper is just the tip of an iceberg. Thanks; please let me know if you have more thoughts about this.

Posted by: Christian Williams on March 30, 2021 4:58 AM

Re: Native Type Theory (Part 3)

The ability to define propositions as subsingleton types is not really specific to HoTT; you can do it in any kind of dependent type theory, and it’s always true in a topos model. But I think Christian is probably right that if you’re working in a model that doesn’t have general universes but does have a subobject classifier, then you need some syntactic separation between propositions and subsingletons in order to have a type of propositions without constructing it from a type of types. And indeed, 1-toposes don’t in general have universe types, although if you assume the existence of an inaccessible cardinal then you can lift it to a universe object in any Grothendieck 1-topos (though not a univalent one).

One could also generalize NTT to use HoTT as its internal language by taking presheaves of simplicial or cubical sets rather than just presheaves of sets. In this case the type universes constructed from inaccessibles could be univalent.

Posted by: Mike Shulman on April 2, 2021 4:38 PM

Re: Native Type Theory (Part 3)

Hi Mike, thanks for the comment. I’m very interested in your last paragraph. Are you suggesting to consider sSet-enriched theories and take enriched presheaves?

I had been thinking about this possibility, but I’ve been uncertain about the lack of literature on enriched topos theory. Though I suppose in this case it falls under (∞,1)-topos theory, which I haven’t yet learned much about.

Anyway, I am definitely interested in exploring this generalization, as well as your idea about stacks. This project has started simple but has a lot of room for expansion. Any future advice on these directions will be appreciated.

Posted by: Christian Williams on April 3, 2021 7:05 PM

Re: Native Type Theory (Part 3)

Are you suggesting to consider sSet-enriched theories and take enriched presheaves?

You could do that, but you can also just start with an ordinary theory, considered as trivially enriched. And yes, here we’re doing (∞,1)-topos theory via model-categorical presentations.

Posted by: Mike Shulman on April 5, 2021 5:24 PM

Re: Native Type Theory (Part 3)

Right. So what does it cost extra to shift from the NTT method to

language to category to \infty-topos to HoTT theory?

Posted by: David Corfield on April 2, 2021 9:02 PM

Re: Native Type Theory (Part 3)

So what does it cost extra to shift from the NTT method to

language to category to \infty-topos to HoTT theory?

I don’t understand what you’re asking, can you clarify?

Posted by: Mike Shulman on April 5, 2021 5:25 PM

Re: Native Type Theory (Part 3)

I’m just curious. Can we take as a rule of thumb something like: Don’t truncate unless you need to? I mean, there seem to be plenty of cases where not truncating to sets makes life easier. (One that comes to mind.)

In the case at hand, what are the costs and benefits of taking presheaves of simplicial sets compared to sets?

Posted by: David Corfield on April 5, 2021 9:14 PM

Re: Native Type Theory (Part 3)

Kevin Buzzard is talking here about inductive types in Lean and points to the way it splits the recursor into two cases depending on the ‘sort’ used, so ‘X.recursor’ for sets and ‘X.inductor’ for propositions.

Once you’ve tasted HoTT is there anything that can bring you to return to the following worldview?

The sets/elements story goes on in the Type universe, and the theorems/proofs story goes on in the Prop universe.

Posted by: David Corfield on April 6, 2021 8:07 AM

Re: Native Type Theory (Part 3)

David wrote:

Once you’ve tasted HoTT is there anything that can bring you to return to the following worldview?

I think that the challenge of writing software often pushes people into doing things in ways that are less conceptually elegant, but which are easier to do quickly, or produce software that runs faster. I don’t think practical programmers are going to embrace the HoTT worldview very soon. As far as I can tell, Native Type Theory is designed to efficiently take existing code in widely used programming languages and embed it in a type theory. So it’s not competing on the basis of ‘most elegant worldview’.

Posted by: John Baez on April 6, 2021 4:50 PM

Re: Native Type Theory (Part 3)

Once you’ve tasted HoTT is there anything that can bring you to return to the [elements vs proofs] worldview?

How about impredicativity?

If propositions are nothing but the (-1)-type special case of types, how did they become so special as to have impredicative quantification, while other h-levels do not? (See also: Topos Theory Can Make You a Predicativist)

It’s technically the case that subsingletons correspond to propositions, but this doesn’t explain much for me. In set theory, you have formulas usually playing the role of propositions; subsingletons are secondary. Formulas and sets have complementary roles.

So set theory has formulas and sets. Type theory has judgments and types. Coincidence? I think it’s easier to understand impredicativity if you do not think of this as a coincidence. Judgments are the real propositions. But doesn’t that just totally screw up HoTT?

I don’t think HoTT is more elegant; I think it’s different, new, and interesting. But it doesn’t yet seem to have an elegant explanation of impredicativity. (I know it was John Baez who (I think) implied HoTT is more elegant than set theory, not you. You only implied set theory is a perspective one would not want to return to. ^_^)

Is there a version of HoTT with an impredicative universe of each h-level? (Starting with the usual 0-type of (-1)-types.) From what I gather, this is inconsistent with the current principles of HoTT, so the version I’m asking about would have other changes.

Posted by: Matt Oliveri on April 6, 2021 11:17 PM

Re: Native Type Theory (Part 3)

For HoTT with an (untruncated) impredicative universe, see:

Impredicative Encodings of (Higher) Inductive Types, S. Awodey, J. Frey, S. Speight, LICS 2018, pp 76-85. https://arxiv.org/abs/1802.02820

Posted by: Steve Awodey on April 7, 2021 3:03 AM

Re: Native Type Theory (Part 3)

I am curious why you gather that HoTT is inconsistent with an impredicative universe?

Posted by: Steve Awodey on April 7, 2021 3:12 AM

Re: Native Type Theory (Part 3)

Steve Awodey wrote:

For HoTT with an (untruncated) impredicative universe, see

Thanks, but this shows that I wasn’t specific enough. I’m asking if there can be an n-type of all (n-1)-types, schematically (or not, if possible) for n ≥ 0. Formally, I suppose this would be a resizing axiom for each n. (Not just for h-props.) So for example, all h-sets are in the universe of all h-sets. To make this interesting, there should still be truncation to each h-level.

I am curious why you gather that HoTT is inconsistent with an impredicative universe?

Unfortunately, I don’t remember anymore what I thought this contradicts, other than maybe “sets cover” (which isn’t usually assumed anyway).

Posted by: Matt Oliveri on April 7, 2021 3:44 AM

Re: Native Type Theory (Part 3)

That’s not what I would call “impredicativity” - “resizing” maybe.

Posted by: Steve Awodey on April 7, 2021 11:13 AM

Re: Native Type Theory (Part 3)

Now I think I remember: You need to restrict the formation of function types or else you get a variant of Girard’s paradox.

Posted by: Matt Oliveri on April 7, 2021 4:24 AM

Re: Native Type Theory (Part 3)

Yes, that’s generally called a resizing axiom. I do think it can be called an “impredicativity” property in the broad philosophical sense, but it’s not generally what type theorists mean when they speak of impredicative universes.

It’s not consistent to have a universe of all h-sets, even if that universe is univalent (hence not an h-set itself). For instance, you can use this to reproduce the Girard/Burali-Forti paradox, since well-founded relations are rigid, hence the type of such is an h-set (even if large). So (-1)-types are indeed special in that it’s consistent to have a universe of all of them; but that fact is arguably less important than it seems.

Posted by: Mike Shulman on April 8, 2021 6:15 AM

Re: Native Type Theory (Part 3)

Hi, Mike.

It’s not consistent to have a universe of all h-sets, even if that universe is univalent (hence not an h-set itself). For instance, you can use this to reproduce the Girard/Burali-Forti paradox,

Right. That’s what I remembered. That’s not a deal breaker for me, since the paradox seems to require full function types.

So (-1)-types are indeed special in that it’s consistent to have a universe of all of them; but that fact is arguably less important than it seems.

Well, I already linked to that post, but I don’t see anything that makes me feel better about h-props being special.

In set theory, propositions are the only h-level, other than the (formally) unrestricted case, which is sets. So it doesn’t look so odd for them to be special there.

In HoTT, it seems like all h-levels should be comprehended by a universe. The strength difference between judgments and types is partially reflected as an h-level increment, when the next dimension is promoted from proposition to set. The unrestricted (untruncated) case can stay uncomprehended. Alternatively, it makes sense if no h-level is comprehended.

I can understand if nobody agrees with these personal judgments about what systems make intuitive sense. But they are a reason for me to think HoTT is still missing something set theory got right.

Yes, that’s generally called a resizing axiom. I do think it can be called an “impredicativity” property in the broad philosophical sense, but it’s not generally what type theorists mean when they speak of impredicative universes.

That’s technically correct. But in usual HoTT + propositional resizing, the universe of h-props effectively has impredicative quantifiers, so I consider it an impredicative universe.

I’m expecting that even with the necessary restriction to function types, in the system I want, something analogous (in my opinion) would happen, and all the h-level universes would be impredicative.

It’s hard to have a deep discussion about purely hypothetical things. Anyway, it looks like you and Awodey don’t have the kind of result I have in mind.

Posted by: Matt Oliveri on April 11, 2021 2:53 AM

Re: Native Type Theory (Part 3)

I already linked to that post, but I don’t see anything that makes me feel better about h-props being special.

That’s exactly the point of the post; that h-props shouldn’t be special. As you said,

Alternatively, it makes sense if no h-level is comprehended.

Posted by: Mike Shulman on April 16, 2021 5:05 PM

Re: Native Type Theory (Part 3)

That’s exactly the point of the post; that h-props shouldn’t be special.

OK, that’s what I thought. But this subthread started with me thinking people might prefer set theory for its explanation of impredicativity. How do you get impredicativity without making h-props special? My idea was to try and get a comprehensive universe for each h-level.

Steve Awodey’s system (which I haven’t really looked into) is another approach to impredicative HoTT, but I was guessing h-props were special there, too.

If there were an impredicative system in which h-props aren’t comprehended, that might also provide a nice explanation. But that sounds even weirder than my idea, unless my intuition is all wrong.

Posted by: Matt Oliveri on April 18, 2021 9:09 AM

Re: Native Type Theory (Part 3)

It is possible to formulate HoTT with an untruncated universe that admits impredicative quantification but does not satisfy the propositional resizing axiom, and so hprops are not special. This even has a model constructed in cubical assemblies. Is that what you are looking for?

Posted by: Mike Shulman on April 19, 2021 4:04 PM

Re: Native Type Theory (Part 3)

Oh, right. Yes, this counts as what I literally asked for. Thanks.

Posted by: Matt Oliveri on April 20, 2021 7:33 AM

Re: Native Type Theory (Part 3)

I mean, well-founded extensional relations are rigid.

A more category-theoretic paradox is that you can use a universe of all h-sets to define a category of all h-sets that therefore admits all limits and colimits. This implies that all covariant endofunctors of that category have initial algebras and hence fixpoints, which is a problem for the double-power-set by a Cantorian diagonalization.

Posted by: Mike Shulman on April 8, 2021 11:41 AM

Re: Native Type Theory (Part 3)

I think this reference is relevant: Dana Scott. Relating theories of the lambda-calculus. In: To H.B. Curry: Essays on Combinatory Logic, Lambda-Calculus and Formalism (eds. Hindley and Seldin), Academic Press, 403–450, 1980.

Posted by: Steve Awodey on March 29, 2021 10:52 PM

Re: Native Type Theory (Part 3)

Hi - sorry I’m having trouble finding a copy online. Do you have one? How does it relate to this post?

I assume the connection is regarding to what extent languages such as the lambda calculus can be represented as higher-order algebraic theories. But perhaps you mean more. Thanks for your thoughts.

Posted by: Christian Williams on March 30, 2021 5:09 AM

Re: Native Type Theory (Part 3)

Here it is:

https://www.andrew.cmu.edu/user/awodey/dump/Scott/ScottRelating.pdf

It contains the idea of representing a simple programming language, like the lambda-calculus, as a structured category, and then embedding it by Yoneda into its category of presheaves, in order to extend it to a much richer language, such as impredicative higher-order logic.

Much work has been done along these lines in the intervening 40 years, inspired by this paper.

Posted by: Steve Awodey on March 30, 2021 4:03 PM

Re: Native Type Theory (Part 3)

Thanks, Steve! That’s an extremely readable paper, written in a conversational tone I wish more people would adopt. It’s also very elementary in some ways: e.g. 14 pages through Scott says “What is a contravariant functor?” and then answers that question in the simplest way, including a parenthetical remark “and note the change of order!”. Shortly thereafter he uses the Yoneda embedding to do the trick Christian and you are talking about.

I’m Christian’s thesis advisor, but I don’t know enough about this stuff to do an optimal job. So, I should try to put him in touch with someone willing and able to quickly catch him up on the 40 years of work you’re alluding to. I think he and Mike Stay are doing some new things, but also some old things, and it would be good to know exactly which is which.

Posted by: John Baez on March 31, 2021 7:11 PM

Re: Native Type Theory (Part 3)

Yes, Dana is a real master. Glad to help if I can - have Christian get in touch.

Posted by: Steve Awodey on March 31, 2021 11:45 PM

Re: Native Type Theory (Part 3)

Thanks for your response, and the reference. This is a good paper; I can see why it inspired a lot of work.

Scott presented the idea of embedding a language or “lambda theory” into higher-order intuitionistic logic (without dependent types), which is great. It would be very helpful to know who has expounded on this idea since then.

I have found papers on presheaf models of type theory, but they are often focused on a particular theoretical aspect, and not simply the general practical aspect of using these richer type systems for everyday programming.

Regardless, all related thoughts and resources are greatly appreciated. Thanks again for your comment.

Posted by: Christian Williams on April 2, 2021 8:09 PM

Re: Native Type Theory (Part 3)

I like what you say after Example 42 about the interaction between behavioral types and structural types, just after an interpretation of some parts of operational semantics in type theoretic terms.

Could you explain to me what you mean by that?

Do you mean, in mathematical terms, that Yoneda preserves limits and exponentials, so that behavioral types are sent to some structural types in a meaningful way?

Is there a hope to use the native type theory associated to rho-calculus (or variants of pi-calculus) in applications (such as concurrent program verification)? Isn’t it “too big” in a combinatorial sense?

Posted by: Frédéric Paugam on April 15, 2021 10:15 AM

Re: Native Type Theory (Part 3)

The trick is that we start with the theory of (directed multi-)graphs and then add term and rewrite constructors to it, so we’re able to represent “behaviors” as “structure” of a particular kind. The edges encode the behavior of a term and the vertices encode the structure of a term.

For example, the algebraic signature for a monoid, in the absence of equations, is really the algebraic signature of a pointed magma:

  • a sort M
  • a function symbol e:1 \to M
  • a function symbol (-\;-):M^2 \to M

It’s only after we add associativity and unit laws that we get a monoid.

Suppose that instead of using equations, we use rewrites. Then we get a simple system that evolves from an arbitrarily parenthesized term to a “normal form” that is (say) left parenthesized. The unit law rewrite rule says that redundant uses of ee get removed.

To model these rewrites, we start by taking MM to be the sort of vertices and add a sort EE for edges, together with some function symbols and equations:

  • sorts M, E
  • function symbols s, t:E \to M
  • function symbol i: M \to E
  • equations s \circ i = t \circ i = \mathrm{id}_M

Next, we add generating edges for the associativity and unit laws:

  • function symbol a:M^3 \to E
  • function symbols l, r:M \to E

We add equations saying what the source and target of these generating edges are:

  • $s a(x,y,z) = (x\;(y\;z))$
  • $t a(x,y,z) = ((x\;y)\;z)$
  • $s l(x) = (e\;x)$
  • $t l(x) = x$
  • $s r(x) = (x\;e)$
  • $t r(x) = x$

Finally, we say that rewriting can happen under a multiplication:

  • $s(x\;y) = (s x\;s y)$
  • $t(x\;y) = (t x\;t y)$

As it stands, the terms of the theory consist of arbitrary parenthesizations of strings of $e$s, and all of them reduce to $e$. To make things more interesting, we add a set of generating function symbols $x_i : 1 \to M$.
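The rewrite system above can be sketched in code. This is my own illustration, not from the post: I encode terms as nested Python tuples (with the strings `'e'`, `'x0'`, `'x1'`, … as the generators), and implement the associator, the unitors, and congruence under multiplication as a normalization procedure.

```python
# A sketch of the rewrite system described above (term encoding is my own).
# Terms: 'e', generators 'x0', 'x1', ..., or a pair (t1, t2) for (t1 t2).

def step(t):
    """Apply one rewrite if possible; return (new_term, changed?)."""
    if not isinstance(t, tuple):
        return t, False
    a, b = t
    # unitor rewrites: (e x) -> x  and  (x e) -> x
    if a == 'e':
        return b, True
    if b == 'e':
        return a, True
    # associator rewrite: (x (y z)) -> ((x y) z)
    if isinstance(b, tuple):
        y, z = b
        return ((a, y), z), True
    # congruence: rewriting can happen under a multiplication
    a2, changed = step(a)
    if changed:
        return (a2, b), True
    return t, False

def normalize(t):
    """Rewrite until a left-parenthesized, e-free normal form is reached."""
    changed = True
    while changed:
        t, changed = step(t)
    return t
```

For example, `normalize(('x0', ('x1', ('e', 'x2'))))` yields the left-parenthesized term `(('x0', 'x1'), 'x2')`, and any parenthesization built only from `'e'` collapses to `'e'`.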

By taking the category of presheaves on this theory, subfunctors of the representable on the vertex sort $M$ are structural types, and subfunctors of the representable on $E$ are (single-step) behavioral types.

A nontrivial structural type is the functor that, when evaluated at the terminal object in the theory, gives those terms not involving multiplication, i.e. $e$ and the various $x_i$. A nontrivial behavioral type is the functor that, when evaluated at the terminal object, gives those edges that do not make use of $a$, i.e. the canonical self-edges and the unitor edges. Both kinds of type are expressible using the set-builder-like internal language of the topos.
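On closed terms, the two example types above amount to simple predicates. This is a hypothetical sketch using the same tuple encoding of terms as before (not the internal language of the topos): edges are tagged by which generating edge symbol they come from.

```python
# Sketch (my own encoding): the two example types as predicates.
# Terms: 'e', generators 'x0', 'x1', ..., or tuples for multiplication.
# Edges are tagged by their generating symbol: 'i', 'l', 'r', or 'a'.

def is_multiplication_free(term):
    """The structural type of terms not involving multiplication:
    just e and the generators x_i."""
    return not isinstance(term, tuple)

def avoids_associator(edge_tag):
    """The behavioral type of edges not using the associator a:
    canonical self-edges i and the unitor edges l, r."""
    return edge_tag in ('i', 'l', 'r')
```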

As far as I know, RChain is planning on using something like Native Type Theory for doing type checking in Rholang. They may want to build it from scratch, or they may want to make it from something like K-Framework and Coq instead, but it’s on their roadmap.

Posted by: Mike Stay on April 16, 2021 1:39 AM | Permalink | Reply to this

Re: Native Type Theory (Part 3)

Thank you for these nice explanations! It is also interesting to know that these methods may be applied to concrete type checking problems.

Posted by: Frédéric Paugam on April 16, 2021 5:37 PM | Permalink | Reply to this

Re: Native Type Theory (Part 3)

Oops! I missed a few important function symbols.

The only notion of multiplication I mentioned was $(-\;-) : M \times M \to M$. However, for an equation like $s(x\;y) = (s x\;s y)$ to make sense, we have to extend multiplication to use edges:

  • $(-\;-) : E \times M \to E$ is for when a rewrite happens in the first slot,
  • $(-\;-) : M \times E \to E$ is for when a rewrite happens in the second slot, and
  • $(-\;-) : E \times E \to E$ is for when rewrites happen in both slots simultaneously.

These need some equations to make sure they are consistent with each other, but you can derive all of them from the idea that if you see a vertex, you can promote it to its canonical self-edge and use the equations above.
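That derivation can be sketched concretely. In this hypothetical encoding (mine, not the post's), an edge is just a (source, target) pair of terms, $i$ promotes a vertex to its self-edge, and the two mixed multiplications reduce to the edge-edge one:

```python
# Sketch (my own encoding): edges as (source, target) pairs of terms.

def i(m):
    """Promote a vertex m to its canonical self-edge."""
    return (m, m)

def mult_edges(f, g):
    """(-)(-): E x E -> E, componentwise: s(f g) = (sf sg), t(f g) = (tf tg)."""
    return ((f[0], g[0]), (f[1], g[1]))

def mult_edge_vertex(f, m):
    """E x M -> E: a rewrite in the first slot, derived by promoting m."""
    return mult_edges(f, i(m))

def mult_vertex_edge(m, f):
    """M x E -> E: a rewrite in the second slot, derived by promoting m."""
    return mult_edges(i(m), f)

# e.g. the left unitor edge on x0, with sl(x0) = (e x0) and tl(x0) = x0,
# multiplied on the right by the vertex x1:
l_x0 = (('e', 'x0'), 'x0')
edge = mult_edge_vertex(l_x0, 'x1')
# edge == ((('e', 'x0'), 'x1'), ('x0', 'x1'))
```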

Posted by: Mike Stay on April 17, 2021 12:47 AM | Permalink | Reply to this

Higher Yoneda and higher native type theory

I am also wondering whether the higher Yoneda lemma has already been used (at least not by Dana Scott ;-) to make a nicely behaved (higher) directed type theory from a given language, by following the path to native type theories you propose in your paper, but formulating it in a higher categorical setting.

This would apply to the 2-categorical approach to pi-calculus proposed here:

and to other situations (e.g. the $\lambda$-calculus with rewriting) where rewritings are considered as 2-morphisms in a 2-category.

By this I mean that (in my viewpoint) the most important non-trivial example of a 1-topos is the category of sets, and the most important non-trivial example of a higher directed $n$-topos (even if this notion is not yet completely clearly defined) should be the $n$-category of $(n-1)$-categories (so, for example, properties of $\omega$-directed topoi should be “modeled” on properties of the $\omega$-category of $\omega$-categories).

If we have a given $n$-category (possibly with additional structure, e.g. monoidal; consider for example the one related to the pi-calculus in the above paper), we may embed it in an $n$-directed topos (with all kinds of nice properties for what one could call a theory of directed $n$-types).

By higher Yoneda, I mean that if $n$ is either finite or infinite, and $C$ is a (weak) $n$-category in any reasonable sense, we may define a presheaf on $C$ as a functor $F : C^{op} \to (n-1)\mathrm{Cat}$ to the $n$-category of $(n-1)$-categories (for usual categories, $(n-1)\mathrm{Cat} = \mathrm{Set}$), and the natural functor $C \to \mathrm{Fun}(C^{op}, (n-1)\mathrm{Cat})$ should give an embedding of our given $n$-category into an $n$-category with much better properties (its objects may be thought of as directed $n$-types). This category is what one could call a presheaf $n$-directed topos.

This should probably be homotopically enhanced (since we need to make sense of functoriality here in a weak sense, and probably also of some universal localizations for the categories in play) by using $(\infty,n)$- and $(\infty,\omega)$-categories instead.

This is a hint, I think, that the notion of $(\infty,n)$-directed higher topos for $n \in \mathbb{N} \cup \{\omega\}$ (not to be confused with $\infty = (\infty,1)$-topoi in the Grothendieck–Lurie or not-yet-elucidated elementary sense) is interesting and should be explored further, with the main example, given by presheaf higher directed topoi, in mind.

In type-theoretic terms, if you start from your 2-categorical formulation of the pi-calculus (which is an operational theory, if I understand correctly), you should embed it into a type theory that is not the usual dependent type theory of a topos, but a directed dependent type theory of an $(\infty,2)$-directed topos of presheaves on your 2-category. You may then use the intuitions you have about the 2-category $\mathrm{Cat}$ and extend them formally to the $(\infty,2)$-topos of presheaves, which should have similar higher categorical properties.

Posted by: Frédéric Paugam on April 15, 2021 2:04 PM | Permalink | Reply to this

Re: Native Type Theory (Part 3)

Just for the reader’s convenience, I added an entry to the nLab:

whose content is evident to specialists, but maybe not to everyone.

This shows that having a way to give a true syntax for talking about higher category theory in a purely syntactic way (i.e., an $n$-directed higher type theory for every $n$) is an interesting problem, even if it is not yet completely solved in the 1-categorical setting, and that an important model for extracting the properties of such a categorical type theory (i.e., the notion of elementary directed $n$-topos) can be taken to be the $(\infty,n)$-category of $(\infty,n-1)$-categories.

Posted by: Frédéric Paugam on April 15, 2021 3:12 PM | Permalink | Reply to this
