The Propositional Fracture Theorem
Posted by Mike Shulman
Suppose $X$ is a topological space and $U \subseteq X$ is an open subset, with closed complement $K = X \setminus U$. Then $U$ and $K$ are, of course, topological spaces in their own right, and we have $X = U \sqcup K$ as a set. What additional information beyond the topologies of $U$ and $K$ is necessary to enable us to recover the topology of $X$ on their disjoint union?
Recall that the subspace topologies of $U$ and $K$ say that for each open $V \subseteq X$, the intersections $V \cap U$ and $V \cap K$ are open in $U$ and $K$, respectively. Thus, if a subset of $U \sqcup K$ is to be open, it must yield open subsets of $U$ and $K$ when intersected with them. However, this condition is not in general sufficient for a subset of $U \sqcup K$ to be open: it does define a topology on $U \sqcup K$, but it's the coproduct topology, which may not be the original one.
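To make the failure concrete, here is a small mechanical check (my own toy example, not from the post) using the Sierpiński space, whose open point plays the role of $U$ and whose closed point plays the role of $K$:

```python
from itertools import chain, combinations

# Sierpinski space: X = {u, k} with opens {}, {u}, {u, k}.
# U = {u} is open; K = {k} is its closed complement.
X_opens = {frozenset(), frozenset({"u"}), frozenset({"u", "k"})}
U, K = frozenset({"u"}), frozenset({"k"})

# Subspace topologies: intersect every open of X with U and with K.
U_opens = {V & U for V in X_opens}
K_opens = {V & K for V in X_opens}

# Coproduct topology on U ⊔ K: S is open iff S∩U is open in U and S∩K is open in K.
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(U | K), r) for r in range(len(U | K) + 1))]
coproduct_opens = {S for S in subsets if S & U in U_opens and S & K in K_opens}

# {k} satisfies the necessary condition, but it is not open in X itself:
print(frozenset({"k"}) in coproduct_opens)  # True
print(frozenset({"k"}) in X_opens)          # False
```

So every subset of the two-point set is coproduct-open, while $X$ itself has only three opens: the necessary condition is strictly weaker than openness in $X$.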
One way we could start is by asking what sort of structure relating $U$ and $K$ we can deduce from the fact that both are embedded in $X$. For instance, suppose $A \subseteq U$ is open. Then there is some open $V \subseteq X$ such that $A = V \cap U$. But we could also consider $V \cap K$, and ask whether this defines something interesting as a function of $A$.
Of course, it’s not clear that $V \cap K$ is a function of $A$ at all, since it depends on our choice of $V$ such that $A = V \cap U$. Is there a canonical choice of such $V$? Well, yes, there’s one obvious canonical choice: since $A$ is open in $U$ and $U$ is open in $X$, $A$ is also open as a subset of $X$, and we have $A \cap U = A$. However, $A \cap K = \emptyset$, so choosing $V = A$ wouldn’t be very interesting.
The choice $V = A$ is the smallest possible $V$ such that $V \cap U = A$. But there’s also a largest such $V$, namely the union of all such $V$. This set is open in $X$, of course, since open sets are closed under arbitrary unions, and since intersections distribute over arbitrary unions, its intersection with $U$ is still $A$.
Let’s call this set $f_*(A)$. In fact, it’s part of a triple of adjoint functors $f_! \dashv f^* \dashv f_*$ between the posets $\mathcal{O}(U)$ and $\mathcal{O}(X)$ of open sets in $U$ and $X$, where $f^*$ is defined by $f^*(V) = V \cap U$, and $f_!$ is defined by $f_!(A) = A$. Here $f$ denotes the continuous inclusion $U \hookrightarrow X$.
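As a quick sanity check, the whole adjoint triple can be computed in a tiny finite model (my own toy example): a three-point “line” $X = \{L, b, R\}$ with opens $\emptyset, \{L\}, \{R\}, \{L,R\}, X$, where $U = \{L\}$ plays the role of an open ray and $b$ a boundary point:

```python
# Finite "line": L = open left ray, b = boundary point, R = open right ray.
X_opens = [frozenset(s) for s in [(), ("L",), ("R",), ("L", "R"), ("L", "b", "R")]]
U = frozenset({"L"})                 # an open subset of X
U_opens = {V & U for V in X_opens}   # subspace topology on U

def f_upper_star(V):   # f^*(V) = V ∩ U
    return V & U

def f_shriek(A):       # f_!(A) = A: an open of U is already open in X
    return A

def f_star(A):         # f_*(A) = union of all opens V of X with V ∩ U ⊆ A
    return frozenset().union(*(V for V in X_opens if V & U <= A))

# Both adjunctions, stated as "iff"s of inclusions between the posets:
for A in U_opens:
    for V in X_opens:
        assert (f_shriek(A) <= V) == (A <= f_upper_star(V))   # f_! ⊣ f^*
        assert (f_upper_star(V) <= A) == (V <= f_star(A))     # f^* ⊣ f_*

print(sorted(f_star(frozenset({"L"}))))  # the largest V with V ∩ U = {L} is all of X
```

Note that `f_star` is computed as the union over all opens $V$ with $V \cap U \subseteq A$; this union is the same set as the largest $V$ with $V \cap U = A$, and is exactly the right adjoint to $f^*$.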
Now we can consider the intersection $f_*(A) \cap K$, which I’ll also denote $g^* f_*(A)$, where $g \colon K \hookrightarrow X$ is the inclusion. It turns out that this is interesting! Consider the following example, which is easy to visualize:
- $X = \mathbb{R}^2$.
- $U = \{ (x,y) \mid x < 0 \}$, the open left half-plane.
- $K = \{ (x,y) \mid x \ge 0 \}$, the closed right half-plane.
If an open subset $A \subseteq U$ “doesn’t approach the boundary” between $U$ and $K$, such as the open disc of radius $1$ centered at $(-2,0)$, then it’s fairly easy to see that $f_*(A) = A \cup \{ (x,y) \mid x > 0 \}$, and therefore $g^* f_*(A)$ is the open right half-plane.
On the other hand, consider some open subset $A$ which does approach the boundary, such as the intersection with $U$ of the open disc of radius $1$ centered at $(0,0)$. A little thought should convince you that in this case, $f_*(A)$ is the union of the open right half-plane with the whole open disc of radius $1$ centered at $(0,0)$. Therefore, $g^* f_*(A)$ is the open right half-plane together with the strip $\{ (0,y) \mid -1 < y < 1 \}$ of the boundary.
This example suggests that in general, $g^* f_*(A)$ measures how much of the “boundary” between $U$ and $K$ is “adjacent” to $A$. I leave it to some enterprising reader to try to make that precise. Here’s another nice exercise: what can you say about $f^* g_*(B)$ for an open subset $B \subseteq K$?
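The same “boundary-detecting” behavior shows up in miniature in the three-point model of a line (again my own toy example, with $b$ the boundary point between the open ray $U = \{L\}$ and its closed complement $K = \{b, R\}$):

```python
# Opens of X = {L, b, R}: {}, {L}, {R}, {L,R}, X.  U = {L}, K = {b, R}.
X_opens = [frozenset(s) for s in [(), ("L",), ("R",), ("L", "R"), ("L", "b", "R")]]
U, K = frozenset({"L"}), frozenset({"b", "R"})

def g_star_f_star(A):  # (largest open V of X with V ∩ U = A), intersected with K
    return frozenset().union(*(V for V in X_opens if V & U <= A)) & K

# A = {L} is "adjacent" to the boundary point b, so g^* f_* picks b up;
# A = {} is not, and g^* f_*({}) is just {R}, the interior of K.
print(sorted(g_star_f_star(frozenset({"L"}))))  # ['R', 'b']
print(sorted(g_star_f_star(frozenset())))       # ['R']
```

This mirrors the half-plane example: the open that touches the boundary contributes the boundary point to $g^* f_*(A)$, while the one that stays away contributes only the interior of $K$.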
Let us however go back to our original question of recovering the topology of $X$. Suppose $A \subseteq U$ and $B \subseteq K$ are open such that $A \sqcup B$ is open in $X$; how does this latter fact manifest as a property of $A$ and $B$? Note first that $(A \sqcup B) \cap U = A$. Thus, since $f_*(A)$ is the largest $V$ such that $V \cap U = A$, we have $A \sqcup B \subseteq f_*(A)$, and therefore $B \subseteq g^* f_*(A)$. Let me say that again:
$$ B \subseteq g^* f_*(A). $$
This is a relationship between $A$ and $B$ which is expressed purely in terms of the topological spaces $U$ and $K$ and the function $g^* f_*$, and which we have just shown is necessary for $A \sqcup B$ to be open in $X$.
In fact, it is also sufficient! For suppose this to be true. Since $B$ is open in $K$, there is some open $V \subseteq X$ such that $B = V \cap K$. Given such a $V$, the union $V \cup A$ also has this property, since $(V \cup A) \cap K = (V \cap K) \cup (A \cap K) = B \cup \emptyset = B$. Note that in fact $A \subseteq V \cup A$, and also $V \cup A \subseteq g_*(B)$, the largest open subset of $X$ whose intersection with $K$ is $B$, namely the union of all such $V$. (Since $B$, unlike $A$, is not open in $X$, there may not be a smallest such $V$, but there is always a largest one.) Now I claim we have
$$ A \sqcup B = f_*(A) \cap g_*(B). $$
To show this, it suffices to show that the two sides become equal after intersecting with $U$ and with $K$. For the first, we have
$$ f_*(A) \cap g_*(B) \cap U = A \cap g_*(B) = A, $$
since $A \subseteq g_*(B)$, and for the second we have
$$ f_*(A) \cap g_*(B) \cap K = g^* f_*(A) \cap B = B, $$
using the assumption $B \subseteq g^* f_*(A)$ at the last step.
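Here is a mechanical check of this claim in the three-point model used above (my own toy example), writing `f_star(A)` for the largest open whose intersection with $U$ is $A$ and `g_star(B)` for the largest open whose intersection with $K$ is $B$:

```python
# X = {L, b, R} with opens {}, {L}, {R}, {L,R}, X;  U = {L}, K = {b, R}.
X_opens = {frozenset(s) for s in [(), ("L",), ("R",), ("L", "R"), ("L", "b", "R")]}
U, K = frozenset({"L"}), frozenset({"b", "R"})
U_opens = {V & U for V in X_opens}
K_opens = {V & K for V in X_opens}

def f_star(A):   # largest open V of X with V ∩ U = A
    return frozenset().union(*(V for V in X_opens if V & U <= A))

def g_star(B):   # largest open V of X with V ∩ K = B
    return frozenset().union(*(V for V in X_opens if V & K <= B))

checked = 0
for A in U_opens:
    for B in K_opens:
        if B <= f_star(A) & K:          # the condition B ⊆ g^* f_*(A)
            assert A | B == f_star(A) & g_star(B)
            assert A | B in X_opens     # and A ⊔ B really is open in X
            checked += 1
print(checked, "admissible pairs checked")
```

Every admissible pair $(A, B)$ passes, and in this model there are five of them, matching the five opens of $X$.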
In conclusion, the topology of $X$ is entirely determined by
- the induced topology of an open subspace $U \subseteq X$,
- the induced topology on its closed complement $K = X \setminus U$, and
- the induced function $g^* f_* \colon \mathcal{O}(U) \to \mathcal{O}(K)$.
Specifically, the open subsets of $X$ are those of the form $A \sqcup B$ (or equivalently, by the above argument, $f_*(A) \cap g_*(B)$), where $A$ is open in $U$, $B$ is open in $K$, and $B \subseteq g^* f_*(A)$.
An obvious question to ask now is: suppose given two arbitrary topological spaces $U$ and $K$ and a function $\phi \colon \mathcal{O}(U) \to \mathcal{O}(K)$; what conditions on $\phi$ ensure that we can define a topology on $U \sqcup K$ in this way, which restricts to the given topologies on $U$ and $K$ and induces $\phi$ as $g^* f_*$? We may start by asking what properties $g^* f_*$ has. Well, it preserves inclusion of open sets (i.e. $A \subseteq A'$ implies $g^* f_*(A) \subseteq g^* f_*(A')$) and also finite intersections ($g^* f_*(A \cap A') = g^* f_*(A) \cap g^* f_*(A')$), including the empty intersection ($g^* f_*(U) = K$). In other words, it is a finite-limit-preserving functor between posets. Perhaps surprisingly, it turns out that this is also sufficient: any finite-limit-preserving $\phi \colon \mathcal{O}(U) \to \mathcal{O}(K)$ allows us to glue $U$ and $K$ in this way; I’ll leave that as an exercise too.
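In the finite model from before, both halves of this story can be verified at once (a sketch, not a proof, and again my own toy example): that $\phi = g^* f_*$ preserves finite intersections, and that gluing along it recovers exactly the topology of $X$:

```python
# X = {L, b, R} with opens {}, {L}, {R}, {L,R}, X;  U = {L}, K = {b, R}.
X_opens = {frozenset(s) for s in [(), ("L",), ("R",), ("L", "R"), ("L", "b", "R")]}
U, K = frozenset({"L"}), frozenset({"b", "R"})
U_opens = {V & U for V in X_opens}
K_opens = {V & K for V in X_opens}

def phi(A):  # phi = g^* f_*
    return frozenset().union(*(V for V in X_opens if V & U <= A)) & K

# phi preserves finite intersections, including the empty one: phi(U) = K.
assert phi(U) == K
for A1 in U_opens:
    for A2 in U_opens:
        assert phi(A1 & A2) == phi(A1) & phi(A2)

# Gluing: the opens of X are exactly the unions A ∪ B with B ⊆ phi(A).
glued = {A | B for A in U_opens for B in K_opens if B <= phi(A)}
assert glued == X_opens
print("glued topology matches the topology of X")
```

Of course this only checks one example; the general sufficiency claim is the exercise in the text.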
Okay, that was some fun point-set topology. Now let’s categorify it. Open subsets of $X$ are the same as $0$-sheaves on it, i.e. sheaves of truth values, or of subsingleton sets, and the poset $\mathcal{O}(X)$ is the $(0,1)$-topos of $0$-sheaves on $X$. So a certain sort of person immediately asks, what about $n$-sheaves for $n > 0$?
In other words, suppose we have $X$, $U$, and $K$ as above; what additional data on the toposes $\mathrm{Sh}(U)$ and $\mathrm{Sh}(K)$ of sheaves (of sets, or groupoids, or homotopy types, etc.) allows us to recover the topos $\mathrm{Sh}(X)$? As in the posetal case, we have adjunctions $f_! \dashv f^* \dashv f_*$ and $g^* \dashv g_*$ relating these toposes, and we may consider the composite $g^* f_* \colon \mathrm{Sh}(U) \to \mathrm{Sh}(K)$.
The corresponding theorem is then that $\mathrm{Sh}(X)$ is equivalent to the comma category of $\mathrm{Sh}(K)$ over $g^* f_*$, i.e. the category of triples $(A, B, \phi)$ where $A \in \mathrm{Sh}(U)$, $B \in \mathrm{Sh}(K)$, and $\phi \colon B \to g^* f_*(A)$. This is true for $1$-sheaves, $2$-sheaves, $\infty$-sheaves, etc. Moreover, the condition on a functor $\Phi \colon \mathrm{Sh}(U) \to \mathrm{Sh}(K)$ ensuring that its comma category is a topos is again precisely that it preserves finite limits. Finally, this all works for arbitrary toposes, not just sheaves on topological spaces. I mentioned in my last post some applications of gluing for non-sheaf toposes (namely, syntactic categories).
One new-looking thing does happen at dimension $1$, though, relating to what exactly the equivalence looks like. The left-to-right direction is easy: we send $C \in \mathrm{Sh}(X)$ to $(f^* C,\ g^* C,\ g^*(\eta_C))$, where $g^*$ is applied to the unit $\eta_C \colon C \to f_* f^* C$ of the adjunction $f^* \dashv f_*$, yielding a map $g^* C \to g^* f_*(f^* C)$. But in the other direction, suppose given $(A, B, \phi)$; how can we reconstruct an object of $\mathrm{Sh}(X)$?
In the case of open subsets, we obtained the corresponding object (an open subset of $X$) as $A \sqcup B$, but now we no longer have an ambient “set of points” in which to take such a union. However, we also had the equivalent characterization of the open subset of $X$ as $f_*(A) \cap g_*(B)$, and in the categorified case we do have objects $f_*(A)$ and $g_*(B)$ of $\mathrm{Sh}(X)$. We might initially try their cartesian product, but this is obviously wrong because it doesn’t incorporate the additional datum $\phi$. It turns out that the right generalization is actually the pullback of $g_*(\phi) \colon g_* B \to g_* g^* f_*(A)$ and the unit of the adjunction $g^* \dashv g_*$ at $f_*(A)$:
$$ C \;=\; f_*(A) \times_{g_* g^* f_*(A)} g_*(B). $$
In particular, any object $C \in \mathrm{Sh}(X)$ can be recovered from $f^* C$ and $g^* C$ by this pullback:
$$ C \;\cong\; f_* f^* C \times_{g_* g^* f_* f^* C} g_* g^* C. $$
Now let’s shift perspective a bit, and ask what all this looks like in the internal language of the topos $\mathrm{Sh}(X)$. Inside $\mathrm{Sh}(X)$, the subtoposes $\mathrm{Sh}(U)$ and $\mathrm{Sh}(K)$ are visible through the left-exact idempotent monads $f_* f^*$ and $g_* g^*$, whose corresponding reflective subcategories are equivalent to $\mathrm{Sh}(U)$ and $\mathrm{Sh}(K)$ respectively. In the internal type theory of $\mathrm{Sh}(X)$, $f_* f^*$ and $g_* g^*$ are modalities, which I will denote $\bigcirc$ and $\bullet$ respectively. Thus, inside $\mathrm{Sh}(X)$ we can talk about “sheaves on $U$” and “sheaves on $K$” by talking about $\bigcirc$-modal and $\bullet$-modal types (or sets).
Moreover, these particular modalities are actually definable in the internal language of $\mathrm{Sh}(X)$. Open subsets of $X$ can be identified with subterminal objects of $\mathrm{Sh}(X)$, a.k.a. h-propositions or “truth values” in the internal logic. Thus, $U$ is such a proposition. Now $\bigcirc$ is definable in terms of $U$ by
$$ \bigcirc A = (U \to A). $$
I’m using type-theorists’ notation here, so $U \to A$ is the exponential $A^U$ in $\mathrm{Sh}(X)$. The other modality is also definable internally, though a bit less simply: it’s the following pushout:
$$ \bullet A = U \sqcup_{U \times A} A. $$
In homotopy-theoretic language, $\bullet A$ is the join of $U$ and $A$, written $U * A$. And if we identify $\mathrm{Sh}(U)$ and $\mathrm{Sh}(K)$ with their images under $f_*$ and $g_*$, then the functor $g^* f_*$ is just the modality $\bullet$ applied to $\bigcirc$-modal types.
Finally, the fact that $\mathrm{Sh}(X)$ is the gluing of $\mathrm{Sh}(U)$ with $\mathrm{Sh}(K)$ means internally that any type $A$ can be recovered from $\bigcirc A$, $\bullet A$, and the induced map $\bullet A \to \bullet \bigcirc A$ as a pullback:
$$ A \;\cong\; \bigcirc A \times_{\bullet \bigcirc A} \bullet A. $$
Now recall that internally, $U$ is a proposition: something which might be true or false. Logically, $\bigcirc A = (U \to A)$ has a clear meaning: its elements are ways to construct an element of $A$ under the assumption that $U$ is true.
The logical meaning of $\bullet A$ is somewhat murkier, but there is one case in which it is crystal clear. Suppose $U$ is decidable, i.e. that it is true internally that “$U$ or not $U$”. If the law of excluded middle holds, then all propositions are decidable; but of course, internally to a topos, the LEM may fail to hold in general. If $U$ is decidable, then we have $X = U \sqcup \lnot U$, where $\lnot U$ is its internal complement. It’s a nice exercise to show that under this assumption we have $\bullet A = (\lnot U \to A)$.
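Here is a sketch of that exercise, arguing by cases on the decidable proposition $U$ (where $\bullet A$ is the join $U \sqcup_{U \times A} A$ as above):

```latex
% Case U true:  U ≃ 1, so the pushout collapses to the point, and
%               the implication from the false proposition ¬U is trivial:
\bullet A \;=\; U \sqcup_{U \times A} A
  \;\simeq\; \mathbf{1} \sqcup_{A} A \;\simeq\; \mathbf{1}
  \;\simeq\; (\mathbf{0} \to A) \;\simeq\; (\lnot U \to A).

% Case U false: U ≃ 0, so the pushout is just A, and ¬U is true:
\bullet A \;=\; U \sqcup_{U \times A} A
  \;\simeq\; \mathbf{0} \sqcup_{\mathbf{0}} A \;\simeq\; A
  \;\simeq\; (\mathbf{1} \to A) \;\simeq\; (\lnot U \to A).
```

Decidability is exactly what licenses the case split, so this argument does not extend to general $U$.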
In other words, if $U$ is decidable, then the elements of $\bullet A$ are ways to construct an element of $A$ under the assumption that $U$ is false. In the decidable case, we also have $\bullet \bigcirc A = \mathbf{1}$, so that $A \cong \bigcirc A \times \bullet A$; and this is just the usual way to construct an element of $A$ by case analysis, doing one thing if $U$ is true and another if it is false.
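The degeneration of the gluing pullback into a product can be seen by currying (a sketch, again assuming $U$ decidable so that $\bullet B \simeq (\lnot U \to B)$):

```latex
\bullet \bigcirc A
  \;\simeq\; (\lnot U \to (U \to A))
  \;\simeq\; ((\lnot U \times U) \to A)
  \;\simeq\; (\mathbf{0} \to A)
  \;\simeq\; \mathbf{1},
\qquad\text{hence}\qquad
A \;\simeq\; \bigcirc A \times_{\mathbf{1}} \bullet A
  \;=\; \bigcirc A \times \bullet A.
```

Here $\lnot U \times U \simeq \mathbf{0}$ because $U$ and its complement cannot both hold.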
This suggests that we might regard internal gluing as a “generalized sort of case analysis” which applies even to non-decidable propositions. Instead of ordinary case analysis, where we have to do two things:
- assuming $U$, construct an element of $A$; and
- assuming not $U$, construct an element of $A$,
in the non-decidable case we have to do three things:
- assuming $U$, construct an element of $A$;
- construct an element of the join $\bullet A = U * A$; and
- check that the two constructions agree in $\bullet \bigcirc A$.
I have no idea whether this sort of generalized case analysis is useful for anything. I kind of suspect it isn’t, since otherwise people would have discovered it, and be using it, and I would have heard about it. But you never know, maybe it has some application. In any case, I find it a neat way to think about gluing.
Let me end with a tantalizing remark (at least, tantalizing to me). People who calculate things in algebraic topology like to work by “localizing” or “completing” their topological spaces at primes, since it makes lots of things simpler. Then they have to try to put this “prime-by-prime” information back together into information about the original space. One important class of tools for this “putting back together” is called fracture theorems. A simple fracture theorem says that if $X$ is a $p$-local space (meaning that all primes other than $p$ are inverted) and some technical conditions hold, then there is a pullback square:
$$ \begin{array}{ccc} X & \to & X^{\wedge}_p \\ \downarrow & & \downarrow \\ X_{\mathbb{Q}} & \to & (X^{\wedge}_p)_{\mathbb{Q}} \end{array} $$
where $(-)^{\wedge}_p$ denotes $p$-completion and $(-)_{\mathbb{Q}}$ denotes “rationalization” (inverting all primes). A similar theorem applies to any space $X$ (with technical conditions), yielding a pullback square
$$ \begin{array}{ccc} X & \to & \prod_p X_{(p)} \\ \downarrow & & \downarrow \\ X_{\mathbb{Q}} & \to & \big(\prod_p X_{(p)}\big)_{\mathbb{Q}} \end{array} $$
where $X_{(p)}$ denotes localization at $p$.
Clearly, there is a formal resemblance to the pullback square involved in the gluing theorem. At this point I feel like I should be saying something about $\mathrm{Spec}(\mathbb{Z})$. Unfortunately, I don’t know what to say! Maybe some passing expert will enlighten us.
Re: The Propositional Fracture Theorem
I have been wondering about this exact thing recently, though not in quite so much detail! This post gave me some new ideas that I will have to sleep on, but hopefully someone can come by and enlighten us both.