### Introduction to Synthetic Mathematics (part 1)

#### Posted by Mike Shulman

John is writing about “concepts of sameness” for Elaine Landry’s book *Category Theory for the Working Philosopher*, and has been posting some of his thoughts and drafts. I’m writing for the same book about homotopy type theory / univalent foundations; but since HoTT/UF will also make a guest appearance in John’s and David Corfield’s chapters, and one aspect of it (univalence) is central to Steve Awodey’s chapter, I had to decide what aspect of it to emphasize in my chapter.

My current plan is to focus on HoTT/UF as a *synthetic theory of $\infty$-groupoids*. But in order to say what that even means, I felt that I needed to start with a brief introduction about the phrase “synthetic theory”, which may not be familiar. My current draft of that “introduction” already runs to more than half the allotted length of my chapter; so clearly it’ll need to be trimmed! But I thought I would go ahead and post some parts of it in its current form; so here goes.

In general, mathematical theories can be classified as *analytic* or *synthetic*. An *analytic* theory is one that *analyzes*, or breaks down, its objects of study, revealing them as put together out of simpler things, just as complex molecules are put together out of protons, neutrons, and electrons. For example, *analytic geometry* analyzes the plane geometry of points, lines, etc. in terms of real numbers: points are ordered pairs of real numbers, lines are sets of points, etc. Mathematically, the basic objects of an analytic theory are *defined* in terms of those of some other theory.

By contrast, a *synthetic* theory is one that *synthesizes*, or puts together, a conception of its basic objects based on their expected relationships and behavior. For example, *synthetic geometry* is more like the geometry of Euclid: points and lines are essentially undefined terms, given meaning by the axioms that specify what we can do with them (e.g. two points determine a unique line). (Although Euclid himself attempted to define “point” and “line”, modern mathematicians generally consider this a mistake, and regard Euclid’s “definitions” (like “a point is that which has no part”) as fairly meaningless.) Mathematically, a synthetic theory is a *formal system* governed by rules or axioms. Synthetic mathematics can be regarded as analogous to foundational physics, where a concept like the electromagnetic field is not “put together” out of anything simpler: it just is, and behaves in a certain way.

The distinction between analytic and synthetic dates back at least to Hilbert, who used the words “genetic” and “axiomatic” respectively. At one level, we can say that modern mathematics is characterized by a rich interplay between analytic and synthetic — although most mathematicians would speak instead of *definitions* and *examples*. For instance, a modern geometer might define “a geometry” to be any structure satisfying Euclid’s axioms, and then work synthetically with those axioms; but she would also construct examples of such “geometries” analytically, such as with ordered pairs of real numbers. This approach was pioneered by Hilbert himself, who emphasized in particular that constructing an analytic example (or *model*) proves the *consistency* of the synthetic theory.
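To make Hilbert’s point concrete, here is a small sketch (my own illustration, not from the post): a finite analytic model, the Fano plane, checked mechanically against the synthetic incidence axiom that two distinct points determine a unique line. The existence of such a model shows the axiom is consistent.

```python
from itertools import combinations

# The Fano plane: 7 points and 7 lines, built from the difference set
# {0, 1, 3} mod 7. This is an *analytic* model: points and lines are
# concrete finite sets, and incidence is set membership.
points = list(range(7))
lines = [frozenset({i % 7, (i + 1) % 7, (i + 3) % 7}) for i in range(7)]

def lines_through(p, q):
    """All lines incident to both p and q."""
    return [l for l in lines if p in l and q in l]

# The synthetic axiom, verified in the model:
# any two distinct points lie on exactly one common line.
assert all(len(lines_through(p, q)) == 1
           for p, q in combinations(points, 2))
```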

However, at a deeper level, almost *all* of modern mathematics is analytic, because it is all analyzed into set theory. Our modern geometer would not actually state her axioms the way that Euclid did; she would instead define a geometry to be a *set* $P$ of points together with a *set* $L$ of lines and a sub*set* of $P\times L$ representing the “incidence” relation, etc. From this perspective, the only *truly* undefined term in mathematics is “set”, and the only truly synthetic theory is Zermelo–Fraenkel set theory (ZFC).

This use of set theory as the common foundation for mathematics is, of course, of 20th century vintage, and overall it has been a tremendous step forward. Practically, it provides a common language and a powerful basic toolset for all mathematicians. Foundationally, it ensures that all of mathematics is consistent relative to set theory. (Hilbert’s dream of an *absolute* consistency proof is generally considered to have been demolished by Gödel’s incompleteness theorem.) And philosophically, it supplies a consistent ontology for mathematics, and a context in which to ask metamathematical questions.

However, ZFC is not the *only* theory that can be used in this way. While not every synthetic theory is rich enough to allow all of mathematics to be encoded in it, set theory is by no means unique in possessing such richness. One possible variation is to use a different sort of set theory, such as Lawvere’s ETCS (the Elementary Theory of the Category of Sets), in which the elements of a set are “featureless points” that are merely *distinguished* from each other, rather than *labeled* individually by the elaborate hierarchical membership structures of ZFC. Either sort of “set” suffices just as well for foundational purposes, and moreover each can be interpreted into the other.

However, we are now concerned with more radical possibilities. A paradigmatic example is topology. In modern “analytic topology”, a “space” is defined to be a *set* of points equipped with a collection of subsets called *open*, which describe how the points vary continuously into each other. (Most analytic topologists, being unaware of synthetic topology, would call their subject simply “topology.”) By contrast, in *synthetic topology* we postulate instead an axiomatic theory, on the same ontological level as ZFC, whose basic objects are spaces rather than sets.

Of course, by saying that the basic objects “are” spaces we do not mean that they are sets equipped with open subsets. Instead we mean that “space” is an undefined word, and the rules of the theory cause these “spaces” to *behave* more or less like we expect spaces to behave. In particular, synthetic spaces *have* open subsets (or, more accurately, open *subspaces*), but they are not *defined by* specifying a set together with a collection of open subsets.

It turns out that synthetic topology, like synthetic set theory (ZFC), is rich enough to encode all of mathematics. There is one trivial sense in which this is true: among all analytic spaces we find the subclass of *indiscrete* ones, in which the only open subsets are the empty set and the whole space. A notion of “indiscrete space” can also be defined in synthetic topology, and the collection of such spaces forms a universe of ETCS-like sets (we’ll come back to these in later installments). Thus we could use them to encode mathematics, entirely ignoring the rest of the synthetic theory of spaces. (The same could be said about the *discrete* spaces, in which *every* subset is open; but these are harder (though not impossible) to define and work with synthetically. The relation between the discrete and indiscrete spaces, and how they sit inside the synthetic theory of spaces, is central to the synthetic theory of *cohesion*, which I believe David is going to mention in his chapter about the philosophy of geometry.)
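A toy illustration of why the indiscrete spaces behave like bare sets (a minimal sketch of my own, using finite spaces in Python): continuity is a genuine constraint in general, but *every* function into an indiscrete space is continuous, so maps between indiscrete spaces are just arbitrary functions — which is exactly the behavior of ETCS-style sets.

```python
from itertools import product

def is_continuous(f, opens_dom, opens_cod):
    """Continuity for finite spaces: the preimage of every open set
    must be open. Maps are dicts; opens are collections of frozensets."""
    def preimage(U):
        return frozenset(x for x in f if f[x] in U)
    return all(preimage(U) in opens_dom for U in opens_cod)

X = {0, 1}
indiscrete = {frozenset(), frozenset(X)}              # only the trivial opens
sierpinski = {frozenset(), frozenset({1}), frozenset(X)}  # {1} open, {0} not

# The swap map fails to be continuous on the Sierpinski space:
# the preimage of the open set {1} is {0}, which is not open.
swap = {0: 1, 1: 0}
assert not is_continuous(swap, sierpinski, sierpinski)

# But EVERY function into the indiscrete space is continuous.
all_funcs = [dict(zip(sorted(X), vals)) for vals in product(X, repeat=2)]
assert all(is_continuous(f, sierpinski, indiscrete) for f in all_funcs)
```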

However, a less boring approach is to construct the objects of mathematics directly *as spaces*. How does this work? It turns out that the basic constructions on sets that we use to build (say) the set of real numbers have close analogues that act on spaces. Thus, in synthetic topology we can use these constructions to build the *space* of real numbers directly. If our system of synthetic topology is set up well, then the resulting space will behave like the analytic space of real numbers (the one that is defined by first constructing the mere *set* of real numbers and then equipping it with the unions of open intervals as its topology).

The next question is, why would we *want* to do mathematics this way? There are a lot of reasons, but right now I believe they can be classified into three sorts: *modularity*, *philosophy*, and *pragmatism*. (If you can think of other reasons that I’m forgetting, please mention them in the comments!)

By “modularity” I mean the same thing as does a programmer: even if we believe that spaces *are* ultimately built analytically out of sets, it is often useful to isolate their fundamental properties and work with those abstractly. One advantage of this is generality. For instance, any theorem proven in Euclid’s “neutral geometry” (i.e. without using the parallel postulate) is true not only in the model of ordered pairs of real numbers, but also in the various non-Euclidean geometries. Similarly, a theorem proven in synthetic topology may be true not only about ordinary topological spaces, but also about other variant theories such as topological sheaves, smooth spaces, etc. As always in mathematics, if we state only the assumptions we need, our theorems become more general.

Even if we only care about one model of our synthetic theory, modularity can still make our lives easier, because a synthetic theory can formally encapsulate common lemmas or styles of argument that in an analytic theory we would have to be constantly proving by hand. For example, just as every object in synthetic topology is “topological”, every “function” between them automatically preserves this topology (is “continuous”). Thus, in synthetic topology every function $\mathbb{R}\to \mathbb{R}$ is automatically continuous; all proofs of continuity have been “packaged up” into the single proof that analytic topology is a model of synthetic topology. (We can still speak about discontinuous functions too, if we want to; we just have to re-topologize $\mathbb{R}$ indiscretely first. Thus, synthetic topology reverses the situation of analytic topology: discontinuous functions are harder to talk about than continuous ones.)

By contrast to the argument from modularity, an argument from philosophy is a claim that the basic objects of mathematics *really are*, or *really should be*, those of some particular synthetic theory. Nowadays it is hard to find mathematicians who hold such opinions (except with respect to set theory), but historically we can find them taking part in the great foundational debates of the early 20th century. It is admittedly dangerous to make any precise claims in modern mathematical language about the beliefs of mathematicians 100 years ago, but I think it is justified to say that in hindsight, *one* of the points of contention in the great foundational debates was *which synthetic theory should be used as the foundation for mathematics*, or in other words *what kind of thing the basic objects of mathematics should be*. Of course, this was not visible to the participants, among other reasons because many of them used the same words (such as “set”) for the basic objects of their theories. (Another reason is that among the points at issue was the very idea that a foundation of mathematics should be built on precisely defined rules or axioms, which today most mathematicians take for granted.) But from a modern perspective, we can see that (for instance) Brouwer’s intuitionism is actually a form of synthetic topology, while Markov’s constructive recursive mathematics is a form of “synthetic computability theory”.

In these cases, the motivation for choosing such synthetic theories was clearly largely philosophical. The Russian constructivists designed their theory the way they did because they believed that everything should be computable. Similarly, Brouwer’s intuitionism can be said to be motivated by a philosophical belief that everything in mathematics should be continuous.

(I wish I could write more about the latter, because it’s really interesting. The main thing that makes Brouwerian intuitionism non-classical is *choice sequences*: infinite sequences in which each element can be “freely chosen” by a “creating subject” rather than being supplied by a rule. The concrete conclusion Brouwer drew from this is that any operation on such sequences must be calculable, at least in stages, using only finite initial segments, since we can’t ask the creating subject to make an infinite number of choices all at once. But this means exactly that any such operation must be *continuous* with respect to a suitable topology on the space of sequences. It also connects nicely with the idea of open sets as “observations” or “verifiable statements” that was mentioned in another thread. However, from the perspective of my chapter for the book, the purpose of this introduction is to lay the groundwork for discussing HoTT/UF as a synthetic theory of $\infty$-groupoids, and Brouwerian intuitionism would be a substantial digression.)
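Brouwer’s inference can be simulated directly (a sketch of my own; the function names are invented): implement an operation on infinite sequences as a program, instrument the sequence to record which entries are queried, and observe that a terminating run inspects only a finite prefix — which is precisely continuity for the usual topology on the space of sequences.

```python
def with_probe(seq):
    """Wrap an infinite sequence (a function n -> value) so that every
    query of an entry is recorded in the returned set."""
    queried = set()
    def probe(n):
        queried.add(n)
        return seq(n)
    return probe, queried

# An operation on infinite sequences: sum the entries at positions 0, 1, 2.
F = lambda s: s(0) + s(1) + s(2)

probe, queried = with_probe(lambda n: n * n)   # the sequence 0, 1, 4, 9, ...
result = F(probe)

# F terminated having inspected only a finite prefix, so any sequence
# agreeing with this one on that prefix gets the same result. The bound
# on the prefix is a "modulus of continuity" for F at this sequence.
assert result == 0 + 1 + 4
assert queried == {0, 1, 2}
```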

Finally, there are arguments from pragmatism. Whereas the modularist believes that the basic objects of mathematics are *actually* sets, and the philosophist believes that they are *actually* spaces (or whatever), the pragmatist says that they *could* be anything: there’s no need to commit to a single choice. Why do we do mathematics, anyway? One reason is because we find it interesting or beautiful. But all synthetic theories may be equally interesting and beautiful (at least to someone), so we may as well study them as long as we enjoy it.

Another reason we study mathematics is because it has some application *outside* of itself, e.g. to theories of the physical world. Now it may happen that all the mathematical objects that arise in some application happen to be (say) spaces. (This is arguably true of fundamental physics. Similarly, in applications to computer science, all objects that arise may happen to be computable.) In this case, why not just base our application on a synthetic theory that is good enough for the purpose, thereby gaining many of the advantages of modularity, but without caring about how or whether our theory can be modeled in set theory?

It is interesting to consider applying this perspective to other application domains. For instance, we also speak of sets outside of a purely mathematical framework, to describe collections of physical objects and mental acts of categorization; could we use spaces in the same way? Might collections of objects and thoughts automatically come with a topological structure by virtue of how they are constructed, like the real numbers do? I think this starts to seem quite natural when we imagine topology in terms of “observations” or “verifiable statements”. Again, saying any more about that in my chapter would be a substantial digression; but I’d be interested to hear any thoughts about it in the comments here!

## Re: Introduction to Synthetic Mathematics (part 1)

I think your argument from pragmatism (“any consistent axiomatization could be interesting”) undersells HoTT/UF. The point is that there is a large and growing community of mathematicians who find this style of argument to be personally appealing. In this sense HoTT/UF is an active mathematical sub-discipline like any other, supported by a community of researchers who think it’s cool. (Here I’m making no distinction between sub-disciplines that are traditionally regarded as more “problem-solving” than “theory-building.” Even for the problem solvers, the question of whether a particular collection of problems is interesting persists.)

I’m really taken by your claim that ZFC is the only remaining synthetic mathematical theory, at least from the perspective of the standard foundations. Of course this is very far removed from common mathematical practice, particularly in higher category theory! A “model-independent” proof involving $(\infty,1)$-categories frequently has the form “such and such is true in 1-category theory and such and such is true in homotopy theory, so by standard arguments we conclude our result.”

I might argue that my joint project with Dominic Verity, to redevelop the foundational category theory of $(\infty,n)$-categories, is “relatively synthetic.” As we explain in a forthcoming paper, all of the basic categorical definitions and proofs are interpretable relative to a particular axiomatization that we introduce. (These axioms, which are ultimately defined using ZFC, are a subset of the axioms satisfied by the fibrant objects in a model category enriched over the Joyal model structure with all objects cofibrant.) Quasi-categories, (iterated) complete Segal spaces, and weak complicial sets all fit into such a “context”, at which point the development of their basic category theory is “synthetic”, by which I mean agnostic to which of these varieties of $(\infty,n)$-categories we are talking about.

Of course, a lot of category theory has exactly the same flavor. The introduction describes how several motivating examples are special cases of a new general definition, and then all the theorems are proven relative to the new axiomatization. (See, e.g., *Combinatorial categorical equivalences of Dold–Kan type*.)