
October 20, 2021

What is the Uniform Distribution?

Posted by Tom Leinster

Today I gave the Statistics and Data Science seminar at Queen Mary University of London, at the kind invitation of Nina Otter. There I explained an idea that arose in work with Emily Roff. It’s an answer to this question:

What is the “canonical” or “uniform” probability distribution on a metric space?

You can see my slides here, and I’ll give a lightning summary of the ideas now.

Let $X$ be a compact metric space.

  • Step 1   The uniform probability distribution (or more formally, probability measure) on $X$ should be one that’s highly spread out. So, we need to be able to quantify the “spread” of a probability distribution on a metric space.

    There are many such measures of spread — a whole one-parameter family of them, in fact. They’re the diversities $(D_q)_{q \in \mathbb{R}^+}$. Or if you prefer, you can work with the entropies $\log D_q$; it makes little difference.

  • Step 2   We now appear to have a problem. Different values of $q$ give different diversity measures $D_q$, so it seems to be hoping for way too much for there to be a probability measure on $X$ that maximizes $D_q$ for uncountably many values of $q$ at once.

    But miraculously, there is! Call it the maximizing measure on $X$.

  • Step 3   Statisticians are very familiar with the idea of a maximum entropy distribution as being somehow canonical or preferable. But the maximizing measure is not what we should call the uniform measure, as it’s not scale-invariant. For example, converting our metric from centimetres to inches would change the maximizing measure, and that’s not good.

    The idea now is to take the large-scale limit. In other words, for each scale factor $t > 0$, write $\mathbb{P}_t$ for the maximizing measure on the scaled space $t X$, and define the uniform measure on $X$ to be $\lim_{t \to \infty} \mathbb{P}_t$. This is scale-invariant.

  • Step 4   Let’s check this gives sensible results. We already know what “uniform distribution” should mean when $X$ is finite, or homogeneous (it should mean Haar measure), or a subset of Euclidean space (it should mean normalized Lebesgue measure). Does our general definition of uniform measure give the right thing in these cases? Yes, it does!

    There’s also a connection between uniform measures and the Jeffreys prior, an “objective” or “noninformative” prior derived from Fisher information.

You can find all this and more in the slides.
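For a finite metric space, everything above is concrete enough to compute. The following sketch (in Python, on a made-up 3-point space) illustrates Steps 1–3: it computes the diversities $D_q$, checks numerically that a single distribution maximizes all of them at once, and watches the maximizing measure tend to the uniform distribution as the scale grows. The closed form used for the maximizing measure (the normalized weighting $Z^{-1}\mathbf{1}$, valid when that vector is positive) is an assumption of this sketch, not the general construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# A made-up 3-point metric space (distance matrix); illustrative only.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])

def diversity(p, Z, q):
    """Diversity of order q of a distribution p, given similarities Z = exp(-d)."""
    Zp = Z @ p
    s = p > 0                      # sum over the support of p
    if q == 1:                     # q = 1 is the limiting case (exponential entropy)
        return float(np.exp(-np.sum(p[s] * np.log(Zp[s]))))
    return float(np.sum(p[s] * Zp[s] ** (q - 1)) ** (1.0 / (1.0 - q)))

def maximizing_measure(D, t):
    """Sketch of Step 2: when the weighting Z^{-1} 1 is positive, normalizing it
    gives the measure maximizing D_q on the scaled space tX for every q
    (an assumption here: we only handle this easy case)."""
    Z = np.exp(-t * D)
    w = np.linalg.solve(Z, np.ones(len(D)))
    assert (w > 0).all()
    return w / w.sum()

# Steps 1-2: one distribution beats random rivals for every q at once.
Z = np.exp(-D)
p_star = maximizing_measure(D, 1.0)
rivals = rng.dirichlet(np.ones(3), size=200)
for q in (0, 0.5, 1, 2):
    assert all(diversity(p_star, Z, q) >= diversity(p, Z, q) - 1e-9 for p in rivals)

# Step 3: the large-scale limit. As t grows, the maximizing measure on tX
# approaches the uniform distribution, the right answer for a finite space.
for t in (1.0, 5.0, 50.0):
    print(t, maximizing_measure(D, t))
```

The Dirichlet rivals are only a spot check, but they show the “miracle” of Step 2 in action: the same $p^\ast$ wins for every $q$ tried.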

Posted at October 20, 2021 4:06 PM UTC


39 Comments & 0 Trackbacks

Re: What is the Uniform Distribution?

Very nice! I’d be very interested to hear if you get any good suggested answers to your questions on the last slide, especially

What properties would you want something with that name [uniform measure] to have?

Posted by: Mark Meckes on October 20, 2021 5:27 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Thanks! I didn’t get any answers to the question you quote. But for the question about possible examples to investigate, Nina made the broad suggestion of looking at the uniform measure on spaces coming from networks. I may be slightly mangling what she said, as I’m not at all familiar with the network world, but apparently there are interesting infinite spaces there. Hopefully I’ll be able to find out about that properly sometime — or maybe someone here can help out.

Posted by: Tom Leinster on October 20, 2021 9:11 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

One thing I definitely remember from my time in the network world was people looking at steady-state distributions for diffusion processes. It would be interesting to see how those compare with “uniform measures” in the sense defined here.

Posted by: Blake Stacey on October 20, 2021 11:53 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

OK, thanks. So, is there a compact metric space here? (Maybe it’s the space on which the steady-state distribution is defined.) If so, can you explain what it is? That’s the context for our construction: given a compact metric space, we define the uniform probability measure on it.

Posted by: Tom Leinster on October 21, 2021 11:30 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

The general idea (if I can dust off my brain cells and remember) is to put a particle on a vertex of the graph and have it execute a random walk. For example, the walker might pick an edge at random out of those connected to the current vertex and traverse it. If the choice of edge is made without bias, then the probability of stepping from vertex $i$ to vertex $j$ is $A_{ij}/k_i$, where $A$ is the adjacency matrix and $k_i$ is the degree of vertex $i$. Thus, we have a discrete-time Markov process. If the graph is connected and not bipartite, this Markov process will have a steady-state distribution with $p_i \propto k_i$, and the inverse of the smallest nonzero eigenvalue of the transition matrix will give the characteristic timescale for approaching that steady-state distribution.
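This walk is easy to simulate. Here is a minimal sketch using an assumed example graph (a triangle with a pendant vertex, which is connected and not bipartite): it power-iterates the transition matrix $A_{ij}/k_i$ and checks the steady state against $p_i \propto k_i$.

```python
import numpy as np

# Assumed example graph: a triangle (vertices 0, 1, 2) plus a pendant vertex 3.
# It is connected and not bipartite, so the walk has a unique steady state.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
k = A.sum(axis=1)                 # vertex degrees k_i
P = A / k[:, None]                # transition matrix P_ij = A_ij / k_i

p = np.array([1.0, 0.0, 0.0, 0.0])  # start the walker at vertex 0
for _ in range(1000):               # iterate p <- p P until it settles
    p = p @ P

print(p)                # steady-state distribution
print(k / k.sum())      # degree-proportional prediction: [0.25 0.25 0.375 0.125]
```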

There are, of course, variations on the theme: picking the target node with a bias, adding weights and/or directionality to the edges, etc. In general, though, the probability distribution under consideration will be defined on the set of vertices. As for regarding the graph as a metric space … probably one would use the shortest-path distance between vertices, as in calculating the magnitude of a graph.
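For the last point, the shortest-path metric is straightforward to compute. A sketch using Floyd–Warshall on a small assumed graph (a triangle with a pendant vertex):

```python
import numpy as np

# Assumed example graph: a triangle (vertices 0, 1, 2) plus a pendant vertex 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Shortest-path distances via Floyd-Warshall: start from edge lengths
# (1 per edge, infinity otherwise) and relax through each intermediate vertex.
d = np.where(A > 0, 1.0, np.inf)
np.fill_diagonal(d, 0.0)
for k in range(len(A)):
    d = np.minimum(d, d[:, [k]] + d[[k], :])

print(d)   # d[0, 3] == 2: vertex 0 reaches vertex 3 through vertex 2
```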

Posted by: Blake Stacey on November 8, 2021 3:03 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Very nice ideas!

Here are two more properties that one may want from a “uniform” distribution on a metric space $X$: first a “uniformity” requirement:

  • It assigns the same amount of measure to isometric (measurable or open) subsets of $X$.

Also, a “positivity” requirement, which admittedly seems pretty strong:

  • It assigns nonzero measure to all nonempty open subsets, in particular, to all open balls.

Here is an example where the latter property is difficult to satisfy: imagine a space given by the union of a square and a line segment, connected at a point. Or more generally, two Riemannian manifolds of different dimensions glued together. (It would be interesting to see what your definition of uniform measure gives in this case, by the way!)

Posted by: Paolo Perrone on October 21, 2021 9:15 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Thanks! That first property you mention is interesting, and I don’t immediately see whether or not it holds for our definition of the uniform measure.

The second one definitely doesn’t, because of examples like the one you mention. The uniform measure on a compact subset $X$ of $\mathbb{R}^n$, of nonzero Lebesgue measure, is Lebesgue measure restricted to $X$ and normalized to a probability measure. So if $X$ is something like a lollipop shape in $\mathbb{R}^2$, the uniform measure gives the stick of the lollipop measure zero. Or more formally, as we write in Remark 9.10 of our paper:

the support of the uniform measure […] need not be $X$; that is, some nonempty open sets may have measure zero. Any nontrivial union of an $n$-dimensional set with a lower-dimensional set gives an example.

This fits with the intuition that if you pick a point of the lollipop uniformly at random, the probability it’s on the stick should be zero (assuming, of course, an infinitely thin stick).

I know there are contexts where it’s useful to have a “reference measure” that gives nonzero measure to nonempty open sets. On the other hand, it might feel quite non-uniform if, say, some one-dimensional piece of a space was assigned higher probability than a two-dimensional piece of the space, which is what would happen if the stick of the lollipop didn’t have measure zero.

Posted by: Tom Leinster on October 21, 2021 11:00 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

It’s certainly true that if $A$ and $A'$ are isometric measurable subsets of $X$ and there’s a self-isometry of $X$ mapping $A$ to $A'$, then the uniform measure gives them the same probability. But this is much weaker than the first property you mention.

Posted by: Tom Leinster on October 21, 2021 11:38 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

On further thought, Paolo’s first requirement —

  • It assigns the same amount of measure to isometric (measurable or open) subsets of $X$

— is unfulfillable, by our or any other way of defining uniform probability measure. At least, it’s unfulfillable if we take the “measurable” option. For “open”, I don’t know.

Here’s why. If $X$ is countably infinite, there is no probability measure on $X$ that satisfies Paolo’s requirement. For any two singleton subsets are isometric, so all singletons have to be assigned the same probability, $c$ say; but then expressing $X$ as a countable union of singletons and using countable additivity gives total probability $0$ (if $c = 0$) or $\infty$ (if $c > 0$), never $1$: a contradiction.

So that explains why this requirement can’t possibly be satisfied. But maybe it’s enlightening to look at an example. In Emily’s and my paper (Section 10, question 3), we mention that

$$\{1, 1/2, 1/3, \ldots, 0\},$$

metrized as a subspace of $\mathbb{R}$, has uniform measure $\delta_0$. Is it reasonable that $\{0\}$ and $\{1\}$, despite being isometric, are assigned different probabilities? It seems to me that it is, because although they themselves are isometric, their immediate neighbourhoods are not.

And this leads naturally to the alternative form of Paolo’s question, where we only ask about open subsets.

It’s a good question!

Posted by: Tom Leinster on October 21, 2021 7:14 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Oh, this is a very good example! Nice!

Posted by: Paolo Perrone on October 22, 2021 2:52 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

OK, now I have a proof that if $X$ is a compact metric space and $A$ and $B$ are isometric clopen subsets of $X$, then the uniform measure on $X$ (assuming it exists) gives the same probability to $A$ and $B$.

Clopen sets are obviously much more special than open sets, but baby steps!

The proof goes in two stages:

  1. It’s easy enough to work out what the maximizing distribution on a coproduct of spaces is, in terms of the maximizing distributions on the individual components. Passing to the large-scale limit tells us about the uniform measure on a coproduct in terms of the uniform measures on the individual components. And from here, we get the case of the result where $A$ and $B$ are disjoint.

  2. To extend to the general case, we can use the same two-copies-of-$X$ trick as here (though in fact, it was in the present context that I first thought of this argument).

Although the clopenness hypothesis is extremely restrictive, there is a useful corollary: in the uniform measure on a compact metric space, all isolated points are assigned the same probability. For example, in the space $\{1, 1/2, 1/3, \ldots, 0\}$ that I mentioned before, every point apart from $0$ is isolated, and there are infinitely many of them, so the uniform measure can only be $\delta_0$.

Posted by: Tom Leinster on October 23, 2021 1:39 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Re the statement in the first paragraph: oops. The proof I thought I had is wrong. I don’t know whether the statement is correct.

Posted by: Tom Leinster on October 31, 2021 11:46 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Here’s another property that seems like it would be nice and seems similar in spirit to Paolo’s:

If $\mu$ is the uniform measure on $X$, and $Y \subseteq X$ is open and satisfies $\mu(Y) > 0$, then the uniform measure on $Y$ is the normalization of $\mu$ restricted to $Y$.

Tom’s example in this comment above shows that this fails without the openness assumption (let $Y = \{0, 1\}$), but with that assumption, the example satisfies this property.

Posted by: Mark Meckes on October 22, 2021 5:44 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

I like this. Moreover, your property implies mine in the case of disjoint subsets: let $Y$ and $Z$ be open subsets of $X$ of positive measure, and suppose they are isometric. Then $Y \cup Z$ is also an open subset of $X$, and so the measure $\mu$ restricts to the uniform measure on $Y \cup Z$ after normalization. Now the isometry $Y \to Z$ does induce a self-isometry of $Y \cup Z$, and so $\mu(Y) = \mu(Z)$.

(Does anyone see a way to make this work if $Y$ and $Z$ are not disjoint?)

Posted by: Paolo Perrone on October 22, 2021 6:08 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

I think we can drop disjointness (at least, if we ignore the problem mentioned in my comment just now).

For consider the coproduct $X + X$, which is two disjoint copies of $X$ at distance $\infty$ from each other. It’s not hard to show that $X + X$ has uniform measure $\tfrac{1}{2}\mu \oplus \tfrac{1}{2}\mu$, in what I hope is obvious notation.

Let $A_1$ denote the first copy of $A$ in $X + X$, and $B_2$ the second copy of $B$. Then $A_1$ and $B_2$ are disjoint open subsets of $X + X$ that are isometric. So by your argument, if Mark’s principle holds then $\tfrac{1}{2}\mu \oplus \tfrac{1}{2}\mu$ gives the same probability to $A_1$ and $B_2$. But this just says that $\mu(A)/2 = \mu(B)/2$. Hence $\mu(A) = \mu(B)$.

Posted by: Tom Leinster on October 23, 2021 1:26 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

I like Mark’s property too, but there’s a difficulty: it talks about uniform measure on a non-compact metric space $Y$, whereas we’ve only defined uniform measure for (some) compact metric spaces.

To briefly recap the definition: one first defines what it means for a probability measure on a compact metric space $A$ to be “maximizing” (i.e. diversity-maximizing).

To define the uniform measure on $A$, we assume that for all sufficiently large $t > 0$, the scaled space $t A$ has a unique maximizing measure $\mu_t$, and we also assume that $\mu_t$ has a limit (in the weak${}^\ast$ topology) as $t \to \infty$. That limit is the uniform measure.

I confess, I don’t have a clear idea of exactly where in all this compactness is needed. It was always just a background assumption for me. Maybe it’s possible to get away without it.

Mark’s property does hold when $Y$ is a clopen subset, for what it’s worth.

Posted by: Tom Leinster on October 23, 2021 1:21 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

I haven’t thought about this carefully at all (and was being knowingly sloppy in my comment above), but maybe this can be dealt with here by requiring $Y$ to be the closure of an open set (which, since $X$ itself is assumed to be compact, will be compact).

Posted by: Mark Meckes on October 23, 2021 1:56 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Yes, that sounds good. And it reminds me that being the closure of an open set (or equivalently, the closure of your interior) was also a hypothesis in Heiko and Magnus’s first paper on magnitude.

Alternatively, it occurs to me that if we were to try to extend the definition of uniform measure to some class of metric spaces including the open subspaces of compact spaces, a natural such class might be the totally bounded spaces. If I’m not mistaken, these are always locally compact.

Incidentally, I wonder about the continuity properties of uniform measure. If two compact metric spaces are nearby in the Gromov–Hausdorff metric, are their uniform measures similar? This question needs making precise, but I feel like your work on continuity of maximum diversity must be the starting point to an answer.

Posted by: Tom Leinster on October 23, 2021 10:15 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

One way someone could try to define a uniform measure on a compact metric space $X$ would be analogously to the Haar measure: for any closed subset $K$, let $[\varepsilon : K]$ be the minimum number of $\varepsilon$-balls needed to cover $K$, define $\mu(K)$ as $\lim_{\varepsilon \to 0} [\varepsilon : K]/[\varepsilon : X]$, and extend $\mu$ to a probability measure.

This doesn’t work. If we let $C \subseteq \mathbb{R}$ be the Cantor set, then $X = C \cup (2 + 2C)$ has $[\varepsilon : C]/[\varepsilon : X]$ oscillate infinitely often between being $1/2$ and $1/3$.
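The oscillation can be seen concretely. The sketch below is an illustration under one convention (covering by intervals of length $\varepsilon$, counted by a greedy left-to-right sweep, on a finite-level approximation of $C$); the helper names are mine. Along $\varepsilon = 3^{-k}$ the ratio is $1/3$, and along $\varepsilon = 2 \cdot 3^{-k}$ it is $1/2$.

```python
from fractions import Fraction

def cantor_intervals(level):
    """The 2^level intervals of the level-th approximation of the Cantor set C."""
    ivs = [(Fraction(0), Fraction(1))]
    for _ in range(level):
        ivs = [iv for a, b in ivs
                  for iv in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    return ivs

def cover_count(ivs, eps):
    """Number of length-eps intervals needed, by a greedy left-to-right sweep
    (which is minimal for covering a union of intervals on the line)."""
    count, covered_to = 0, Fraction(-1)
    for a, b in sorted(ivs):
        start = max(a, covered_to)
        while start < b:
            count += 1
            covered_to = start + eps
            start = covered_to
    return count

C = cantor_intervals(10)
X = C + [(2 + 2 * a, 2 + 2 * b) for a, b in C]   # X = C u (2 + 2C)

for k in range(2, 7):
    for eps in (Fraction(1, 3 ** k), Fraction(2, 3 ** k)):
        ratio = Fraction(cover_count(C, eps), cover_count(X, eps))
        print(k, eps, ratio)    # alternates between 1/3 and 1/2
```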

When it does work, does this coincide with your notion of uniform measure, or can it produce something different?

Posted by: Jem Lord on October 23, 2021 3:15 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Interesting question. I don’t know.

What you describe sounds very much like Hausdorff measure, which Emily and I say a little bit about in Section 10 of our paper. I don’t know much about the relationship between uniform and Hausdorff measure, but I suspect it’s complicated, since uniform measure is closely related to Minkowski dimension rather than Hausdorff dimension.

Posted by: Tom Leinster on October 24, 2021 12:19 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Maybe we’re beginning to get somewhere on properties of the uniform measure!

Here’s a list of conjectures mentioned so far. To simplify things a bit, I’ll silently assume that every space involved has a uniform measure, and I’ll denote the uniform measure on $X$ by $\mu_X$.

  • Continuity  Let $X$ be a compact metric space. Then the function $$\{\text{nonempty closed subsets of } X\} \to \{\text{probability measures on } X\}$$ defined by $A \mapsto \mu_A$ is continuous with respect to the Hausdorff metric on the domain and the weak${}^\ast$ topology on the codomain.

  • Restriction  Let $X$ be a compact metric space and $A$ a nonempty subset of $X$ that is the closure of an open set. Then $$\mu_X|_A = \mu_X(A) \cdot \mu_A,$$ where the left-hand side is the restriction of $\mu_X$ to $A$.

    (Incidentally, is there a good one-word name for a subset of a topological space that’s a closure of an open set, or equivalently, the closure of its interior? People sometimes call such a set a “domain”, but since that has at least two other meanings, I’d prefer another name.)

  • Invariance for closures of open sets  Let $X$ be a compact metric space and let $A$ and $B$ be nonempty subsets of $X$ that are both closures of open sets. If $A$ and $B$ are isometric then $\mu_X(A) = \mu_X(B)$.

  • Invariance for open sets  Similarly.

Now some commentary. First, a general point: restriction and isometry-invariance do hold for clopen subsets. This follows from some easy stuff about the uniform measure on a coproduct; see this comment.

  • Continuity  Mark proved (about ten years ago now) that maximum diversity, as a function of positive definite compact metric spaces, is continuous with respect to the Gromov–Hausdorff metric. That gives grounds for optimism, but there’s still some distance between this and a proof of the continuity of uniform measure. Some factors: (i) we need to think about the maximizing measures themselves, not just the maximum diversity; (ii) we have to pass to the large-scale limit; (iii) ideally, it would be nice to drop the positive-definiteness condition.

  • Restriction  Here’s a possible strategy for deducing restriction from continuity. Let $A$ be a closed subset of a compact metric space $X$. For small $\varepsilon > 0$, let $B_\varepsilon$ be the complement of the open $\varepsilon$-neighbourhood of $A$. Then $A \cup B_\varepsilon$ is Hausdorff-close to $X$, since it’s just $X$ with a thin shell around $A$ removed. So if continuity holds then $\mu_{A \cup B_\varepsilon} \approx \mu_X$. But $A$ is a clopen subset of $A \cup B_\varepsilon$, and restriction does hold for clopen sets. So we should get $$\mu_X|_A \approx \mu_{A \cup B_\varepsilon}|_A = \mu_{A \cup B_\varepsilon}(A) \cdot \mu_A \approx \mu_X(A) \cdot \mu_A,$$ hence $$\mu_X|_A \approx \mu_X(A) \cdot \mu_A.$$ And with luck, letting $\varepsilon \to 0$ should turn that $\approx$ into an $=$.

  • Invariance for closures of open sets.  This should follow from restriction, by the argument that Paolo and I put together earlier.

  • Invariance for open sets.  I don’t currently see how this would follow from invariance for closures of open sets. We do know that if two open sets are isometric then so are their closures (since closure is completion, which is functorial). But taking the closure of an open set may increase its measure (e.g. consider $\{1, 1/2, 1/3, \ldots, 0\}$ again). So I don’t see where to go with this.

It may well be that the hypotheses for these conjectures aren’t quite right. But I think there’s at least a sketch of a plan here.

Posted by: Tom Leinster on October 23, 2021 8:00 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

A thought: if the restriction conjecture is true, then it gives a way to extend the notion of uniform measure beyond the compact case. At least on a locally compact metric space $X$, demanding that $\mu_X|_A = \mu_X(A) \cdot \mu_A$ for all compact closure-of-open sets $A$ should uniquely determine $\mu_X$ up to scaling. For $X = \mathbb{R}^n$, Lebesgue measure is uniform in this sense by the result in the paper.

Posted by: lambda on October 23, 2021 9:41 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Hmm, thinking further about it, it’s not completely obvious that it really is unique up to scaling, because you have to rule out the scale factor itself being somehow non-uniform.

Posted by: lambda on October 24, 2021 4:53 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Here’s a slightly stronger continuity conjecture:

The function $$\{\text{compact metric spaces}\} \to \{\text{metric measure spaces}\}$$ defined by $$(X, d) \mapsto (X, d, \mu_X)$$ is continuous with respect to the Gromov–Hausdorff topology on the domain and the Gromov–Wasserstein topology on the codomain.

Posted by: Mark Meckes on October 23, 2021 11:30 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Incidentally, I’m pretty sure my proof of continuity of maximum diversity doesn’t actually need positive definiteness (though I haven’t thought about this stuff carefully in a while). That hypothesis was just thrown in everywhere since I was focused on magnitude for positive definite spaces, but all the positivity one needs is automatic when you’re only working with positive measures.

Posted by: Mark Meckes on October 23, 2021 11:36 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Ah, I’m beginning to think we might have had this conversation before. And indeed, Proposition 3.12 of our survey paper says exactly what you’re saying now:

The maximum diversity $|A|_+$ is continuous as a function of $A$, on the class of compact metric spaces equipped with the Gromov–Hausdorff topology.

No positive definiteness!

Posted by: Tom Leinster on October 24, 2021 12:30 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

A little progress on continuity: let $X$ be a compact metric space and let $(Y_n)$ be a sequence of closed subspaces converging to $X$ in the Hausdorff metric. Suppose that $X$ has a unique maximizing measure $\nu_X$, and similarly that each $Y_n$ has a unique maximizing measure $\nu_{Y_n}$. Then we can show that $\nu_{Y_n}$ converges to $\nu_X$ in the weak${}^\ast$ topology on probability measures on $X$.

The proof starts from Mark’s result that the maximum diversity is continuous in this sense. (In fact, it’s continuous in a stronger sense, but never mind that for now.) Then we use a little lemma: if you have a compact space $S$ and a continuous function $\phi: S \to \mathbb{R}$ that achieves its maximum at only one point $s_{\max}$, then any sequence $(s_n)$ in $S$ satisfying

$$\phi(s_n) \to \phi(s_{\max})$$

must in fact satisfy

$$s_n \to s_{\max}.$$

In our case, $S$ is the space of probability measures on $X$.
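For completeness, the lemma follows from a standard subsequence argument; a sketch:

```latex
\begin{proof}[Proof sketch of the lemma]
Suppose $s_n \not\to s_{\max}$. Then some subsequence stays outside a
neighbourhood of $s_{\max}$, and by compactness of $S$ it has a further
subsequence $s_{n_k} \to s'$ with $s' \neq s_{\max}$. By continuity of $\phi$,
\[
  \phi(s') = \lim_k \phi(s_{n_k}) = \phi(s_{\max}),
\]
so $s'$ is also a maximum point, contradicting uniqueness.
\end{proof}
```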

That gives a proof of the result I stated, but something feels cheap and nasty. We have no control over the speed of convergence of $\nu_{Y_n}$ to $\nu_X$, and that’s going to be a problem when we attempt to pass to the large-scale limit (replacing $Y_n$ and $X$ by $t Y_n$ and $t X$ for $t \gg 0$). Maybe we have to think harder about what could cause a maximizing measure to be unique.

Posted by: Tom Leinster on October 25, 2021 7:51 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

I suspect you mean here to be positing that the spaces have unique uniform measures?

Of course a sufficient condition for the uniqueness of the uniform measure (assuming its existence) would be that $X$ has a unique maximizing measure at each scale $t > 0$.

A sufficient condition for the uniqueness of the maximizing measures is contained in the proof of Proposition 8.8 of your paper with Emily: if the bilinear form $$\langle \mu, \nu \rangle = \int_X \int_X e^{-t d(x,y)} \, d\mu(x) \, d\nu(y)$$ on the space $M(X)$ of signed measures on $X$ is strictly positive definite, then the maximizing measure at scale $t$ is unique.
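On a finite space this condition is checkable directly: the form is given by the matrix $Z_t = (e^{-t d(x_i, x_j)})_{ij}$, and strict positive definiteness means all its eigenvalues are positive. A sketch, on a made-up 3-point space (which embeds in the plane, so positive definiteness is expected here; it fails for some other metric spaces):

```python
import numpy as np

# Made-up 3-point metric space; it embeds isometrically in the plane, so the
# kernel exp(-t d) should be positive definite at every scale t (an assumption
# specific to this example, not a general fact).
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.5],
              [2.0, 1.5, 0.0]])

for t in (0.1, 1.0, 10.0):
    Zt = np.exp(-t * D)
    lam_min = np.linalg.eigvalsh(Zt).min()
    print(t, lam_min)   # positive at scale t => unique maximizing measure there
```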

As you know well, the latter condition holds in many interesting cases, and fails in many other interesting cases.

Posted by: Mark Meckes on October 25, 2021 9:34 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Ah, no, I was misreading your comment, and you can disregard the first two paragraphs of my comment.

The rest is worth reading, though.

Posted by: Mark Meckes on October 25, 2021 9:36 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Thanks. I feel the constant pull of positive definiteness hypotheses… though I don’t know whether even this strong one will be strong enough.

Ultimately, the hypotheses in the definition of uniform measure seem a bit provisional. Or at least, I don’t think the tyres have been thoroughly kicked. For example, is assuming that $t X$ has a unique maximizing measure for all $t \gg 0$ really the right thing to do? I’m not sure.

Posted by: Tom Leinster on October 25, 2021 11:46 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

It’s conceivable that one could have a situation where there may be multiple maximizing measures for some $t$, but that any sequence of maximizing measures with $t \to \infty$ converges to the same limit. But that’s a pretty unsatisfying hypothesis.

Posted by: Mark Meckes on October 26, 2021 3:12 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Mark wrote:

It’s conceivable that one could have a situation where there may be multiple maximizing measures for some $t$, but that any sequence of maximizing measures with $t \to \infty$ converges to the same limit. But that’s a pretty unsatisfying hypothesis.

I feel the same way, but let me explain the picture in my head.

We have our compact metric space $X$. We also have the space $P(X)$ of probability measures on it, with the weak${}^\ast$ topology (metrized by the Wasserstein metric). This is also compact. For each $t > 0$, the maximizing measures on $t X$ form a nonempty closed set $M_t \subseteq P(X)$ (which is convex if $X$ is of negative type). One can think about how $M_t$ changes as $t \to \infty$.

The optimistic picture in my head is of $M_t$ moving and shrinking down to a single point as $t \to \infty$. (Imagine a water droplet sliding down a slope while simultaneously evaporating.) To say that it shrinks down to a point is equivalent to your hypothesis.

I wonder whether there are some general results about the behaviour of $M_t$ as $t$ grows. For example, does your hypothesis always hold? Does $\operatorname{diam}(M_t) \to 0$ as $t \to \infty$? (Whatever diameter means.) If we write

$$M_\infty = \bigcap_{T > 0} \operatorname{Cl}\biggl(\bigcup_{t \geq T} M_t\biggr)$$

for the set of limit points of sequences of maximizing measures as $t \to \infty$, does $M_\infty$ always have at least one element? At most one element? Ideally it would always have exactly one, which we would then call the uniform measure.

Posted by: Tom Leinster on October 26, 2021 10:11 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

I like this picture! First observation: in a compact metric space, like $P(X)$, the intersection of a decreasing sequence of closed nonempty subsets is nonempty. So we do indeed have $$M_\infty = \bigcap_{n=1}^\infty \operatorname{Cl}\Bigl(\bigcup_{t \ge n} M_t\Bigr) \neq \emptyset.$$

So if we generalize the definition of uniform measure a bit, we could say that every compact metric space possesses some uniform measures.

Another standard compactness argument shows that the diameter of the intersection of such a sequence of sets is at least the infimum of the diameters of the sets, hence the diameter of the intersection is $0$ iff the diameters shrink to $0$. So the uniqueness of the uniform measure is equivalent to $$\lim_{n \to \infty} \operatorname{diam}\Bigl(\bigcup_{t \ge n} M_t\Bigr) = 0,$$ for any metric that metrizes the weak${}^\ast$ topology on $P(X)$. Which is pretty close to your question about $\operatorname{diam}(M_t)$, and very closely related to the question about continuity of maximizing measures.

Posted by: Mark Meckes on October 26, 2021 5:34 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Great, I’m already happier with that definition of a uniform measure. It makes sense on an arbitrary compact metric space, and there’s always at least one of them (silly me for not seeing that).

Posted by: Tom Leinster on October 26, 2021 9:03 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

A subset $A \subset X$ of a topological space $X$ whose interior is dense in $A$ is called regular. So a regular closed subset is one that is the closure of its interior.

Posted by: David Roberts on October 24, 2021 1:32 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Thanks!

Posted by: Tom Leinster on October 24, 2021 1:47 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Here’s what I think is a proof that if the restriction conjecture holds then so does the invariance-for-open-sets conjecture.

As ever, I’ll silently assume where necessary that every compact metric space has a uniform measure (necessarily unique).

Let $X$ be a compact metric space, and let $U$ and $W$ be isometric nonempty open subsets of $X$. Let $f: U \to W$ be an isometry. Then $f$ extends uniquely to an isometry $\overline{f}: \overline{U} \to \overline{W}$ between the closures. Under the isometry $\overline{f}$, the uniform measure $\mu_{\overline{U}}$ on $\overline{U}$ corresponds to the uniform measure $\mu_{\overline{W}}$ on $\overline{W}$, simply because it’s an isometry. In particular, since $f U = W$,

$$\mu_{\overline{U}}(U) = \mu_{\overline{W}}(W).$$

Now we use restriction. Applied to the closures-of-opens $\overline{U}$ and $\overline{W}$, it gives

$$\mu_X|_{\overline{U}} = \mu_X(\overline{U}) \cdot \mu_{\overline{U}}, \qquad \mu_X|_{\overline{W}} = \mu_X(\overline{W}) \cdot \mu_{\overline{W}},$$

and in particular, $$\mu_X(U) = \mu_X(\overline{U}) \cdot \mu_{\overline{U}}(U), \qquad \mu_X(W) = \mu_X(\overline{W}) \cdot \mu_{\overline{W}}(W).$$

Also, we’ve already seen that restriction implies invariance for closures of open sets, so

$$\mu_X(\overline{U}) = \mu_X(\overline{W}).$$

Putting this all together gives $\mu_X(U) = \mu_X(W)$, as required.

So where are we on these conjectures?

Modulo existence of uniform measures, it seems that the invariance conjectures both follow from the restriction conjecture. I sketched a proof that restriction follows from the continuity conjecture (I mean the weak one that I stated, not the nice strong Gromovy one that Mark stated). If that sketch proof can be made to go through then the remaining challenge is to prove continuity.

The deduction of restriction from continuity might not be as straightforward as my sketch made it look, though. The problem is that convergence in the weak${}^\ast$ topology on the space of probability measures doesn’t imply convergence of probabilities of a particular set. That is, $\nu \to \mu$ doesn’t imply $\nu(A) \to \mu(A)$ for all measurable $A$. This somehow has to be overcome.

Posted by: Tom Leinster on October 24, 2021 1:46 AM | Permalink | Reply to this

Re: What is the Uniform Distribution?

The problem is that convergence in the weak${}^\ast$ topology on the space of probability measures doesn’t imply convergence of probabilities of a particular set. That is, $\nu \to \mu$ doesn’t imply $\nu(A) \to \mu(A)$ for all measurable $A$. This somehow has to be overcome.

You may already know this, but the so-called Portmanteau theorem, which gives a number of equivalent conditions for convergence of probability measures, says that for probability measures on a metric space, $\nu \to \mu$ is equivalent to $$\nu(A) \to \mu(A) \quad \text{for all Borel sets } A \text{ with } \mu(\partial A) = 0.$$

(The convergence $\nu \to \mu$ here is the probabilists’ “weak convergence” or “convergence in distribution”, which in general is not quite the same as convergence in the weak${}^\ast$ topology, but is the same on a compact metric space, which is the setting we care about here.)
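A textbook-style example of the gap (my illustration, not from the paper): on $\mathbb{R}$, let $\nu_n = \delta_{1/n}$, which converges weakly to $\mu = \delta_0$, and take $A = (0, 1]$. Then $\nu_n(A) = 1$ for every $n$ while $\mu(A) = 0$; Portmanteau is not violated, because $\partial A = \{0, 1\}$ has $\mu(\partial A) = 1$. A quick numerical sketch:

```python
import math

# nu_n = point mass at 1/n; mu = point mass at 0.
f = math.cos                       # any bounded continuous test function

# Weak convergence: the integral of f against nu_n is f(1/n), tending to f(0).
gaps = [abs(f(1.0 / n) - f(0.0)) for n in (1, 10, 100, 1000)]
print(gaps)                        # shrinks toward 0

# But for the Borel set A = (0, 1], set-probabilities do not converge:
in_A = lambda x: 0.0 < x <= 1.0
print([in_A(1.0 / n) for n in (1, 10, 100, 1000)])   # nu_n(A) = 1 for all n
print(in_A(0.0))                                      # mu(A) = 0: prints False
# No contradiction with Portmanteau: the boundary {0, 1} carries mu-mass 1.
```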

I haven’t yet given any thought to whether this observation helps in this case.

Posted by: Mark Meckes on October 28, 2021 1:47 PM | Permalink | Reply to this

Re: What is the Uniform Distribution?

Thanks. I’d understood that the difficulty had to do with the possibility of $\partial A$ having positive probability, but I didn’t know that theorem.

Posted by: Tom Leinster on October 28, 2021 3:13 PM | Permalink | Reply to this
