Entropy, Diversity and Cardinality (Part 2)
Posted by David Corfield
Guest post by Tom Leinster
What happened last time?
Ecologically, I explained some ways to measure the diversity of an ecosystem — but only taking into account abundance (how many of each species there are), not similarity (how much the various species have in common). Mathematically, I explained how to assign a number to a finite probability space, measuring how uniform the distribution is.
What’s happening now?
Ecologically, we’ll see how to measure diversity in a way that takes into account both abundance and similarity. So, for instance, an ecosystem consisting of three species of eel in equal numbers will count as less diverse than one consisting of equal numbers of eels, eagles and earwigs. Mathematically, we’ll see how to assign a number to any finite ‘probability metric space’ (a set equipped with both a probability distribution and a metric).
By thinking about diversity, we’ll be led to the concept of the cardinality of a metric space. (Some ecologists got there years ago.) And by building on progress made at this blog, we’ll solve some open problems posed in the ecology literature.
Previously
In the first post, we modelled an ecosystem as a ‘finite probability space’ — that is, a finite sequence $(p_1, \ldots, p_n)$ of non-negative reals summing to $1$, with $p_i$ representing the abundance of the $i$th species. The post was all about ways of measuring such systems. We found three infinite families of measures, each indexed by a real parameter $\alpha \geq 0$; true to the title, they were the ‘$\alpha$-entropy’, ‘$\alpha$-diversity’ and ‘$\alpha$-cardinality’.
In a sense it doesn’t matter which family you use, since each determines the others. For instance, if you tell me what the $3.7$-entropy of your ecosystem is, I can tell you what its $3.7$-diversity and $3.7$-cardinality are. But sometimes the context makes one more convenient than the others.
I’ll quickly summarize the definitions. There were lots of formulas last time, but in fact everything follows from this one alone: for each $\alpha \geq 0$, the ‘surprise function’ $\sigma_\alpha: [0, 1] \to [0, \infty]$ is defined by $\sigma_\alpha(p) = \frac{1}{\alpha - 1} (1 - p^{\alpha - 1}).$ Here I’m assuming that $\alpha \neq 1$. I’ll come back to $\alpha = 1$ later.
The $\alpha$-diversity of a finite probability space $A = (p_1, \ldots, p_n)$ is the expected surprise, $D_\alpha(A) = \sum_{i = 1}^n p_i \sigma_\alpha(p_i).$ It takes its minimum value, $0$, when the distribution is concentrated at one point (that is, some $p_i$ is $1$), and its maximum value, $\sigma_\alpha(1/n)$, when the distribution is uniform ($p_1 = \cdots = p_n = 1/n$).
The $\alpha$-cardinality is defined in terms of the $\alpha$-diversity in such a way that its minimum value is $1$ (‘essentially just one species’) and its maximum is $n$ (‘all species fully represented’). This pretty much forces us to define the $\alpha$-cardinality of $A$ by $|A|_\alpha = 1/\sigma_\alpha^{-1} (D_\alpha(A)).$
Finally, the $\alpha$-entropy is just the logarithm of the $\alpha$-cardinality: $H_\alpha(A) = \log |A|_\alpha.$
To handle $\alpha = 1$, take limits: $\sigma_1(p) = \lim_{\alpha \to 1} \sigma_\alpha (p)$, $D_1(A) = \lim_{\alpha \to 1} D_\alpha (A)$, etc. With a bit of care, you can handle $\alpha = \infty$ similarly.
I’ve said all that in as formula-free a way as I can. But from what I’ve said, you can easily derive explicit formulas for diversity, cardinality and entropy, and work out the especially interesting cases $\alpha = 0, 1, 2, \infty$. For ease of reference, here it all is. $\begin{array}{llll} & \alpha\text{-diversity, } D_\alpha & \alpha\text{-cardinality, } |\,|_\alpha & \alpha\text{-entropy, } H_\alpha \\ \text{General } \alpha \neq 1, \infty & \frac{1}{\alpha - 1} \left( 1 - \sum p_i^\alpha \right) & \left( \sum p_i^\alpha \right)^{1/(1 - \alpha)} & \frac{1}{1 - \alpha} \log \sum p_i^\alpha \\ \alpha = 0 & n - 1 & n & \log n \\ \alpha = 1 & -\sum p_i \log p_i & \prod p_i^{-p_i} & -\sum p_i \log p_i \\ & \text{(Shannon entropy)} & \text{(cardinality)} & \text{(Shannon entropy)} \\ \alpha = 2 & 1 - \sum p_i^2 & 1/\sum p_i^2 & -\log \sum p_i^2 \\ & \text{(Simpson diversity)} & \text{(reciprocal Simpson)} & \\ \alpha = \infty & 0 & 1/\max p_i & -\log \max p_i \\ & & \text{(reciprocal Berger–Parker)} & \\ \text{Minimum value} & 0 & 1 & 0 \\ \text{Maximum value} & \sigma_\alpha(1/n) & n & \log n \end{array}$
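These formulas are easy to put into code. Here’s a minimal sketch of the Part 1 measures (the function names are my own choices, not anything standard), illustrating that the uniform distribution is invariant:

```python
import math

# A sketch of the Part 1 measures for a finite probability space.
# Species of zero abundance are ignored (the convention 0 log 0 = 0).

def diversity(p, alpha):
    """alpha-diversity: the expected surprise D_alpha."""
    if alpha == 1:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)  # Shannon entropy
    return (1 - sum(pi ** alpha for pi in p if pi > 0)) / (alpha - 1)

def cardinality(p, alpha):
    """alpha-cardinality: (sum_i p_i^alpha)^(1/(1-alpha)), with the alpha = 1 limit."""
    if alpha == 1:
        return math.prod(pi ** -pi for pi in p if pi > 0)
    return sum(pi ** alpha for pi in p if pi > 0) ** (1 / (1 - alpha))

def entropy(p, alpha):
    """alpha-entropy: the log of the alpha-cardinality."""
    return math.log(cardinality(p, alpha))

# The uniform distribution on 4 points is invariant:
# its alpha-cardinality is 4 for every alpha.
uniform = [1/4] * 4
print([round(cardinality(uniform, a), 6) for a in (0, 0.5, 1, 2, 10)])
```

(The maximum value claimed in the table is visible here: no distribution on 4 points has $\alpha$-cardinality above 4.)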
Let me draw your attention to two important properties of cardinality:
- Multiplicativity $|A \times B|_\alpha = |A|_\alpha \times |B|_\alpha$ for all $\alpha \geq 0$ and probability spaces $A$ and $B$.
- Invariant max and min For most probability spaces $A$, the cardinality $|A|_\alpha$ changes with $\alpha$. Let’s call $A$ invariant if, on the contrary, $|A|_\alpha$ is independent of $\alpha \in (0, \infty)$.
Fix a cardinality $n \geq 1$ for our underlying set. Then for all $\alpha$, the maximum value of $|A|_\alpha$ is $n$, and for all $\alpha$, this is attained at the uniform distribution $A = (1/n, \ldots, 1/n)$. In particular, the uniform distribution is invariant. Similarly, for all $\alpha$, the minimum value of $|A|_\alpha$ is $1$, and for all $\alpha$, this is attained at any $A$ of the form $(0, \ldots, 0, 1, 0, \ldots, 0)$. In particular, $(0, \ldots, 0, 1, 0, \ldots, 0)$ is invariant.
In fact, the invariant probability spaces can be neatly classified:
Theorem Let $n \geq 1$. Let $I$ be a nonempty subset of $\{1, \ldots, n\}$, and define $(p_1, \ldots, p_n)$ by $p_i = \begin{cases} 1/|I| & \text{if } i \in I \\ 0 & \text{otherwise.} \end{cases}$ Then $(p_1, \ldots, p_n)$ is invariant. Moreover, every invariant probability space arises in this way.
The two cases just mentioned are where $I$ has $1$ or $n$ elements.
The cardinality of metric spaces: a lightning introduction
We’ll use the concept of the ‘cardinality’ of a finite metric space (which is not the same thing as the cardinality of its set of points!). Here I’ll tell you everything you need to know about it in order to follow this post. If you want more, try these slides; skip the first few if you don’t know any category theory. If you want more still, try the discussion here at the Café.
Here goes. Let $A$ be a finite metric space with points $a_1, \ldots, a_n$; write $d_{i j}$ for the distance from $a_i$ to $a_j$. (We allow $d_{i j} = \infty$.) Let $Z$ be the $n \times n$ matrix with $Z_{i j} = e^{-d_{i j}}$. A weighting on $A$ is a column vector $\mathbf{w}$ such that $Z \mathbf{w} = \left( \begin{matrix} 1 \\ \vdots \\ 1 \end{matrix} \right).$ The cardinality of the metric space $A$, written $|A|_{MS}$, is the sum $\sum_{i = 1}^n w_i$ of the entries of $\mathbf{w}$. This is independent of the choice of weighting $\mathbf{w}$, so cardinality is defined as long as at least one weighting exists.
Examples If $A$ is empty then $|A|_{MS} = 0$. If $A$ has just one point then $|A|_{MS} = 1$. If $A$ has two points, distance $d_{1 2} = d$ apart, then $|A|_{MS} = 1 + \tanh(d/2).$ So when $d = 0$ we have $|A|_{MS} = 1$; think of this as saying ‘there is effectively only one point’. As the points move further apart, $|A|_{MS}$ increases, until when $d = \infty$ we have $|A|_{MS} = 2$; think of this as saying ‘there are two completely distinguished points’. More generally, if $A$ is an $n$-point space with $d_{i j} = \infty$ whenever $i \neq j$, then $|A|_{MS} = n$. You can think of cardinality as the ‘effective number of points’.
Usually $Z$ is invertible. In that case there is a unique weighting, and $|A|_{MS}$ is the sum of all $n^2$ entries of $Z^{-1}$. There are, however, examples of finite metric spaces for which no weighting exists; then the cardinality is undefined.
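In the invertible case, computing $|A|_{MS}$ is a single linear solve. Here is a sketch (the function name is mine), which also confirms that with the $e^{-d}$ convention the two-point space has cardinality $2/(1 + e^{-d}) = 1 + \tanh(d/2)$:

```python
import math
import numpy as np

# Cardinality of a finite metric space from its distance matrix D,
# assuming the matrix Z = (e^{-d_ij}) is invertible (the usual case).

def metric_space_cardinality(D):
    Z = np.exp(-np.asarray(D, dtype=float))
    w = np.linalg.solve(Z, np.ones(len(Z)))  # the unique weighting
    return w.sum()

# Two points a distance d apart: |A|_MS = 2 / (1 + e^{-d}) = 1 + tanh(d/2),
# increasing from 1 (at d = 0) towards 2 (as d -> infinity).
d = 1.3
print(metric_space_cardinality([[0, d], [d, 0]]))
print(1 + math.tanh(d / 2))  # same value
```

(Distances of $\infty$ are handled automatically, since `np.exp(-np.inf)` is $0$: a space with all off-diagonal distances infinite has $Z$ the identity and cardinality $n$.)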
Notes for the learned 1. Previously I’ve used $e^{-2d_{i j}}$ rather than $e^{-d_{i j}}$. The difference amounts to rescaling the metric by a factor of $2$, so is fairly harmless. In the present context the reason for inserting the $2$ seems not to be very relevant, so I’ve dropped it.
2. I’m writing $|A|_{MS}$, rather than the customary $|A|$, to emphasize that it’s the cardinality of $A$ as a metric space. It’s not, in general, the same as the cardinality of the underlying set, or the cardinality (Euler characteristic) of the underlying topological space. (Moral: forgetful functors don’t preserve cardinality.) This will be important later.
3. Lawvere has argued that the symmetry axiom should be dropped from the definition of metric space. I’m assuming throughout this post that our metric spaces are symmetric, but I think everything can be done without the symmetry axiom.
Outline
We now refine our model of an ecosystem by taking similarity between species into account. A (finite) probability metric space is a finite probability space $(p_1, \ldots, p_n)$ together with a metric $d = (d_{i j})$ on $\{1, \ldots, n\}$. I’ll often denote a probability metric space by $A = (p_1, \ldots, p_n; d)$.
The distances $d_{i j}$ are to be interpreted as a measure of the dissimilarity between species. There’s a biological decision to be made as to how exactly to quantify it. It could be percentage genetic difference, or difference in toe length, or how far up the evolutionary tree you have to go before you find a common ancestor, … Anyway, I’ll assume that the decision has been made.
Note that in the definition of probability metric space, there’s no axiom saying that the probability and metric structure have to be compatible in any way. This reflects ecological reality: just because your community contains three-toed sloths, there’s no guarantee that you’ll also find two-toed sloths.
Example In 1875, Alfred Russel Wallace described a journey through a tropical forest:
If the traveller notices a particular species and wishes to find more like it, he may often turn his eyes in vain in every direction. Trees of varied forms, dimensions, and colours are around him, but he rarely sees any one of them repeated. Time after time he goes towards a tree which looks like the one he seeks, but a closer examination proves it to be distinct. He may at length, perhaps, meet with a second specimen half a mile off, or may fail altogether, till on another occasion he stumbles on one by accident.
(Tropical Nature and Other Essays, quoted in the paper of Patil and Taillie cited in Part 1.) Wallace’s description tells us three things about the ecosystem: it contains many species, most of them are rare, and there are many close resemblances between them. In our terminology, $n$ is large, most $p_i$s are small, and many $d_{i j}$s are small.
An important player in this drama is the quantity $\sum_{j = 1}^n e^{-d_{i j}} p_j$ associated to each point $i$ of a probability metric space. You can interpret it in a couple of ways:
- ‘How much stuff there is near $i$’. Think of the probability metric space as a metric space with a certain amount $p_i$ of ‘stuff’ piled up at each point $i$. (It’s like our ClustrMap.) Then think of $\sum_j e^{-d_{i j}} p_j$ as a sum of $p_j$s weighted by the $e^{-d_{i j}}$s. The point $i$ itself contributes the full measure $p_i$ of its stuff; a nearby point $j$ contributes most of its measure $p_j$; a far-away point contributes almost nothing.
- ‘Mean closeness to $i$’. While $d_{i j}$ measures distance, $e^{-d_{i j}}$ measures closeness, on a scale of $0$ to $1$: for points close together it’s nearly $1$, and for points far apart it’s close to $0$. So $\sum_j e^{-d_{i j}} p_j$ is the mean closeness to point $i$.
When we’re thinking ecologically, we might say similarity instead of closeness, so that identical species have similarity $1$ and utterly different species have similarity $0$. Then $\sum_j e^{-d_{i j}} p_j$ is the expected similarity between an individual of species $i$ and an individual chosen at random. So even if species $i$ itself is rare, a great abundance of species similar to $i$ will ensure that $\sum_j e^{-d_{i j}} p_j$ is high.
I’ll call $\sum_j e^{-d_{i j}} p_j$ the density at $i$. The second interpretation makes it clear that density is always in $[0, 1]$.
The metric aspect of a probability metric space is encapsulated in the $n \times n$ matrix $Z = (e^{-d_{i j}})_{i, j}$ mentioned earlier. (You could call $Z$ the similarity matrix.) The probabilistic aspect is encapsulated in the $n$-dimensional column vector $\mathbf{p} = (p_1, \ldots, p_n)$. Their product $Z\mathbf{p}$ encapsulates density: it is the column vector whose $i$th entry is the density at $i$.
In what sense are we generalizing Part 1? Well, we’ve replaced sets by metric spaces, so we need to know how metric spaces generalize sets. This works as follows: given a set $A$, the discrete metric space on $A$ is the metric space in which the points are the points of $A$ and the distance between any two distinct points is $\infty$. This construction is left adjoint to the forgetful functor from (metric spaces and distance-decreasing maps) to sets.
If $A$ is discrete then the similarity matrix $Z$ is the identity, so $Z\mathbf{p} = \mathbf{p}$, i.e. $\sum_j e^{-d_{i j}} p_j = p_i$ for all $i$. Ecologically, discreteness means that distinct species are regarded as completely different: we’re incapable of seeing the resemblance between butterflies and moths.
The plan We’re going to define, for each $\alpha \geq 0$, the $\alpha$-diversity, $\alpha$-cardinality and $\alpha$-entropy of any probability metric space. We’ll do this by taking the definitions for probability spaces without metrics (Part 1) and judiciously replacing certain occurrences of ‘$p_i$’ by ‘$(Z\mathbf{p})_i$’.
Then things get interesting. For a fixed set, it was easy to find the probability distribution that maximized the cardinality (or equivalently, the diversity or entropy). For a fixed metric space, it’s not so obvious… but the answer has something to do with the cardinality of the metric space.
We’ll then see how to apply what we know about cardinality of metric spaces to some problems in the ecological literature, and finally, we’ll see how the cardinality of metric spaces was first discovered in ecology.
Surprise, revisited
Imagine you’re taking samples from an ecosystem. So far you’ve found one species of toad, two species of frog, and three species of newt. How surprised would you be if the next individual you found was a frog of a different species? Not very surprised, because although it would be the first time you’d seen that particular species in your ecosystem, you’d already seen lots of similar species. You’d have been much more surprised if, for instance, you’d found a penguin.
Imagine you’re studying the vocabulary used by a particular novelist. You’re tediously going through his work, listing all the words he’s used. So far your data indicates that he writes rather formally, favouring long, Latinate words and avoiding slang. How surprised would you be if the next word you found was ‘erubescent’? Not very surprised, even if it was the first time you’d ever come across the word, because you’d already seen lots of similar words in his books. You’d have been much more surprised if the next word had been ‘bummer’.
Both scenarios show that your surprise at an event depends not only on the (perceived) probability of the event itself, but also on the probability of similar events. We’ll interpret this mathematically as follows: instead of our surprise at finding species $i$ being a function of the probability, $p_i$, it will be a function of the density $\sum_j e^{-d_{i j}} p_j = (Z\mathbf{p})_i$.
Recall from Part 1 that a surprise function is a decreasing function $\sigma: [0, 1] \to [0, \infty]$ with $\sigma(1) = 0$. Any surprise function $\sigma$ gives rise to a diversity measure on finite probability metric spaces: the diversity of $A = (p_1, \ldots, p_n; d)$ is the expected surprise, $\sum_{i = 1}^n p_i \sigma((Z\mathbf{p})_i).$ This is just the same as the formula in Part 1 except that $\sigma(p_i)$ has been changed to $\sigma((Z\mathbf{p})_i)$. In the degenerate case of a discrete metric space, where we’re blind to the similarities between species, it’s no change at all.
With almost no further thought, we can now extend the concepts of $\alpha$-diversity, $\alpha$-cardinality and $\alpha$-entropy from probability spaces to probability metric spaces.
Diversity
Let $\alpha \geq 0$. The $\alpha$-diversity of a probability metric space $A = (p_1, \ldots, p_n; d)$ is $D_\alpha(A) = \sum_{i = 1}^n p_i \sigma_\alpha((Z\mathbf{p})_i) = \begin{cases} \frac{1}{\alpha - 1} \left(1 - \sum p_i (Z\mathbf{p})_i^{\alpha - 1} \right) & \text{if } \alpha \neq 1 \\ - \sum p_i \log (Z\mathbf{p})_i & \text{if } \alpha = 1. \end{cases}$ In the case of a discrete metric space, this reduces to what we called $\alpha$-diversity in Part 1.
I’ll run through the usual examples, $\alpha \in \{0, 1, 2\}$. It seems that the easiest to understand is $\alpha = 2$, so I’ll start with that.
Example: $\alpha = 2$ We have $D_2(A) = 1 - \sum_{i, j} p_i e^{-d_{i j}} p_j = \mathbf{p}^t \Delta \mathbf{p},$ where $\mathbf{p}^t$ is the transpose of the column vector $\mathbf{p}$ and $\Delta$ is the $n\times n$ matrix given by $\Delta_{i j} = 1 - e^{-d_{i j}}$. How can we understand this? Well, $e^{-d_{i j}}$ is what I called the similarity between species $i$ and $j$, so $\Delta_{i j}$ might be called the difference. It’s an increasing function of $d_{i j}$, measured on a scale of $0$ to $1$. Choose two individuals at random: then $D_2(A)$ is the expected difference between them.
This quantity $D_2(A)$ is known as Rao’s quadratic diversity index, or Rao’s quadratic entropy, named after the eminent statistician C.R. Rao. In its favour are the straightforward probabilistic interpretation and the mathematically useful fact that it’s a quadratic form.
In the degenerate case of a discrete metric space, $D_2$ is Simpson’s diversity index, described in Part 1.
Example: $\alpha = 1$ We have $D_1(A) = - \sum_{i = 1}^n p_i \log(Z\mathbf{p})_i.$ In the discrete case, $D_1$ is Shannon entropy. In the general case, it would be the cross entropy of $\mathbf{p}$ and $Z\mathbf{p}$, except that cross entropy is usually only defined when both arguments are probability distributions, and this isn’t the case for our second argument, $Z\mathbf{p}$.
Example: $\alpha = 0$ Even $0$-diversity is nontrivial: $D_0(A) = \sum_{i = 1}^n \frac{p_i}{(Z\mathbf{p})_i} - 1.$ In the discrete case, $D_0(A)$ is $n - 1$, the ‘species richness’. In general, the $0$-diversity is highest when, for many $i$, the density at $i$ is not much greater than the probability at $i$ — that is, there are not many individuals of species similar to the $i$th. This fits sensibly with our intuitive notion of diversity.
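As a quick sanity check on these examples, here is a sketch of $D_\alpha$ in code (function and variable names are mine): the density $(Z\mathbf{p})_i$ is just a matrix–vector product, and the discrete case does reduce to the Part 1 measures.

```python
import math
import numpy as np

# alpha-diversity of a probability metric space (p; d).

def alpha_diversity(p, D, alpha):
    p = np.asarray(p, dtype=float)
    density = np.exp(-np.asarray(D, dtype=float)) @ p  # (Zp)_i: the density at i
    terms = [(pi, zi) for pi, zi in zip(p, density) if pi > 0]
    if alpha == 1:
        return -sum(pi * math.log(zi) for pi, zi in terms)
    return (1 - sum(pi * zi ** (alpha - 1) for pi, zi in terms)) / (alpha - 1)

inf = float('inf')
# Discrete case: three totally dissimilar species, uniform abundances.
D_discrete = [[0, inf, inf], [inf, 0, inf], [inf, inf, 0]]
print(alpha_diversity([1/3, 1/3, 1/3], D_discrete, 0))  # species richness minus 1: 2
# Two similar species plus one distant one: less 2-diversity than the discrete case.
D_metric = [[0, 1, 10], [1, 0, 10], [10, 10, 0]]
print(alpha_diversity([1/3, 1/3, 1/3], D_metric, 2))
```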
The family $(D_\alpha)_{\alpha \geq 0}$ of diversity measures was introduced here:
Carlo Ricotta, Laszlo Szeidl, Towards a unifying approach to diversity measures: Bridging the gap between the Shannon entropy and Rao’s quadratic index, Theoretical Population Biology 70 (2006), 237–243
…almost. Actually, there’s a subtle difference.
Aside: what Ricotta and Szeidl actually do Near the beginning of their paper, Ricotta and Szeidl make a convention: that in their metric spaces of species, all distances will be $\leq 1$. This may seem innocuous, if mathematically unnatural, but turns out to be rather significant.
Motivated by ideas about conflict between species, they define, for each $\alpha \geq 0$, a diversity measure for probability metric spaces. At first glance the formula seems different from ours. However, it becomes the same if we rescale the range of allowed distances from $[0, 1]$ to $[0, \infty]$ via the order-preserving bijection $\begin{matrix} [0, \infty] &\leftrightarrow &[0, 1] \\ d &\leftrightarrow &1 - e^{-d}. \end{matrix}$ We’ve already seen the formula $1 - e^{-d}$: earlier, we wrote $\Delta_{i j} = 1 - e^{-d_{i j}}$ and called it the difference between species $i$ and $j$. In other words, the ‘distance’ in Ricotta and Szeidl’s metric space is what I’d call the ‘difference’. It’s easy to show that if the distances $d_{i j}$ define a metric on $\{1, \ldots, n\}$ then so do the differences $\Delta_{i j}$ (but not conversely). The presence of two metrics on the same set can be slightly confusing.
Cardinality and entropy
In Part 1 we applied an invertible transformation to rescale $\alpha$-diversity into something more mathematically convenient, $\alpha$-cardinality. We’ll now apply the same rescaling to obtain a definition of $\alpha$-cardinality in our more sophisticated, metric context. And, as before, we’ll call its logarithm ‘$\alpha$-entropy’.
So, let $\alpha \geq 0$ and let $A = (p_1, \ldots, p_n; d)$ be a probability metric space. Its $\alpha$-cardinality is $|A|_\alpha = 1/\sigma_\alpha^{-1}(D_\alpha(A)) = \begin{cases} \left( \sum_{i = 1}^n p_i (Z\mathbf{p})_i^{\alpha - 1} \right)^{\frac{1}{1 - \alpha}} & \text{if } \alpha \neq 1 \\ \prod_{i = 1}^n (Z\mathbf{p})_i^{-p_i} & \text{if } \alpha = 1. \end{cases}$ Its $\alpha$-entropy is $H_\alpha(A) = \begin{cases} \frac{1}{1 - \alpha} \log \left(\sum p_i (Z\mathbf{p})_i^{\alpha - 1}\right) & \text{if } \alpha \neq 1 \\ - \sum p_i \log (Z\mathbf{p})_i & \text{if } \alpha = 1. \end{cases}$ The $\alpha$-cardinality is always a real number $\geq 1$; the $\alpha$-diversity and $\alpha$-entropy are always real numbers $\geq 0$.
Example: $\alpha = 0$ The $0$-cardinality of a probability metric space is $\sum p_i/(Z\mathbf{p})_i$.
Example: $\alpha = 1$ As in the discrete case, the $1$-cardinality has a special property not shared by the other $\alpha$-cardinalities: that the cardinality of a convex combination $\lambda A + (1 - \lambda)B$ of probability metric spaces is determined by $\lambda$ and the cardinalities of $A$ and $B$. I’ll leave the details to you, curious reader.
Example: $\alpha = 2$ This is the reciprocal of a quadratic form: $|A|_2 = 1/ \mathbf{p}^t Z \mathbf{p}.$
Example: $\alpha = \infty$ The limit $\lim_{\alpha\to\infty} |A|_\alpha$ exists for every probability metric space $A$; its value, the $\infty$-cardinality of $A$, is $|A|_\infty = 1/\max_i (Z\mathbf{p})_i.$
All the $\alpha$-cardinalities are multiplicative, in the following sense. Any two probability metric spaces $A = (p_1, \ldots, p_n; d)$, $B = (q_1, \ldots, q_m; e)$ have a ‘product’ $A \otimes B$. The underlying set is $\{1, \ldots, n\} \times \{1, \ldots, m\}$. The probability of event $(i, j)$ is $p_i q_j$. The metric is the ‘$d_1$ metric’: if $i, i' \in \{1, \ldots, n\}$ and $j, j' \in \{1, \ldots, m\}$ then the distance from $(i, j)$ to $(i', j')$ is $d_{i i'} + e_{j j'}$. (Viewing metric spaces as enriched categories, this is the usual tensor product.) The multiplicativity result is that for all $\alpha$, $A$ and $B$, $|A \otimes B|_\alpha = |A|_\alpha \times |B|_\alpha.$ Equivalently, $H_\alpha$ is log-like: $H_\alpha(A \otimes B) = H_\alpha(A) + H_\alpha(B)$.
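Multiplicativity is easy to check numerically: the density of the product factorizes as $(Z_{A \otimes B}\,\mathbf{p} \otimes \mathbf{q})_{(i,j)} = (Z\mathbf{p})_i (Z'\mathbf{q})_j$, and everything follows. A sketch with the tensor product built explicitly (helper names are mine):

```python
import numpy as np

# Numerical check of |A ⊗ B|_alpha = |A|_alpha × |B|_alpha for the
# product described above: product distribution, sum ('d_1') metric.

def alpha_cardinality(p, D, alpha):
    p = np.asarray(p, dtype=float)
    z = np.exp(-np.asarray(D, dtype=float)) @ p  # the densities (Zp)_i
    if alpha == 1:
        return float(np.prod(z ** -p))
    return float(np.sum(p * z ** (alpha - 1)) ** (1 / (1 - alpha)))

def tensor(pA, DA, pB, DB):
    pA, pB = np.asarray(pA, dtype=float), np.asarray(pB, dtype=float)
    DA, DB = np.asarray(DA, dtype=float), np.asarray(DB, dtype=float)
    n, m = len(pA), len(pB)
    p = np.kron(pA, pB)                                       # P(i,j) = p_i q_j
    D = (DA[:, None, :, None] + DB[None, :, None, :]).reshape(n * m, n * m)
    return p, D

pA, DA = [0.2, 0.8], [[0, 1], [1, 0]]
pB, DB = [0.5, 0.3, 0.2], [[0, 2, 3], [2, 0, 1], [3, 1, 0]]
pAB, DAB = tensor(pA, DA, pB, DB)
for alpha in (0, 0.5, 1, 2, 5):
    lhs = alpha_cardinality(pAB, DAB, alpha)
    rhs = alpha_cardinality(pA, DA, alpha) * alpha_cardinality(pB, DB, alpha)
    print(alpha, round(lhs, 10), round(rhs, 10))  # agree for every alpha
```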
Maximizing diversity
When a species becomes extinct, biodiversity is reduced. Less drastically, diversity decreases when a species becomes rare. Conversely, a highly diverse ecosystem is one containing a wide variety of species, all well-represented, with no one species dominant.
Now suppose you’re in charge of a small ecosystem. The set of species is fixed, but you can control the abundance of each species. What should you do to maximize diversity?
Before we seek a mathematical answer, let’s try to get a feel for it. If there are only two species, the answer is immediate: you want them evenly balanced, $50$–$50$. What if there are three? In the absence of further information, you’d want $33\frac{1}{3}$% of each. But let’s suppose that two of them are rather similar — say, Persian wheat and Polish wheat — and one is relatively different — say, poppies. Then the $33\frac{1}{3}$–$33\frac{1}{3}$–$33\frac{1}{3}$ split would mean that your system was dominated two-to-one by wheat, and that doesn’t seem like maximal diversity. On the other hand, if you regarded the two types of wheat as essentially the same, you’d want to split it $25$–$25$–$50$. And if you acknowledged that the two wheats were different, but not very different, you might choose a compromise such as $30$–$30$–$40$.
Here’s the mathematical question. Fix a finite metric space $A = (\{1, \ldots, n\}, d)$.
Question Let $\alpha \geq 0$. What probability distribution on $A$ maximizes the $\alpha$-cardinality, and what is the maximum value?
It doesn’t matter whether the question is posed in terms of $\alpha$-entropy, $\alpha$-diversity or $\alpha$-cardinality, since the three are related by invertible, order-preserving transformations. (For more on maximizing entropy, see David’s post.) And, incidentally, the analogous question for minimum values is easy: the minimum value is $1$, attained when some $p_i$ is $1$.
Rough Answer Often, the $\alpha$-cardinality is maximized by taking the probability distribution $(w_1/|A|_{MS}, \ldots, w_n/|A|_{MS})$, for any weighting $\mathbf{w}$ on $A$, and the maximum value is the cardinality $|A|_{MS}$ of the metric space $A$.
Thus — finally, as promised! — we see how the concept of the cardinality of a metric space comes naturally from ecological considerations.
Let me say immediately what’s rough about the Rough Answer.
- It’s certainly not true in complete generality — in fact, it doesn’t even make sense in complete generality, since not every finite metric space has a well-defined cardinality.
- It’s not always true even when $A$ does have well-defined cardinality, since there are some metric spaces $A$ for which $|A|_{MS} < 1$, and $1$ is the minimum value.
- Furthermore, there are metric spaces $A$ for which there is a unique weighting $\mathbf{w}$, but $w_i/|A|_{MS} < 0$ for some values of $i$. Then the ‘probability distribution’ in the Rough Answer is not a probability distribution at all.
- And even when $|A|_{MS}$ is defined and $\geq 1$, even when $A$ has a unique weighting $\mathbf{w}$ with $w_i \geq 0$ for all $i$, I don’t know that the Rough Answer is necessarily correct.
So, in what sense is the Rough Answer an answer?
I’ll give a series of partial justifications. Immediately, the Rough Answer is correct when $A$ is discrete: then $|A|_{MS}$ is $n$ (the cardinality of the underlying set), the unique weighting is $(1, \ldots, 1)$, and the probability distribution mentioned is the uniform distribution.
Notation Since we’ve fixed a metric space $A = (\{ 1,\ldots, n\}, d)$ and we’re interested in a varying probability distribution $\mathbf{p} = (p_1, \ldots, p_n)$ on $A$, from now on I’ll write the $\alpha$-cardinality of the probability metric space defined by $A$ and $\mathbf{p}$ as $|\mathbf{p}|_\alpha$.
Fact 1 Let $\mathbf{w}$ be a weighting on $A$ and write $\mathbf{p} = \mathbf{w}/|A|_{MS}$. Then $|\mathbf{p}|_\alpha = |A|_{MS}$, for all $\alpha \geq 0$.
(In order for $\mathbf{p}$ to be a probability distribution, we need all the weights $w_i$ to be non-negative; but even if they’re not, we can still define $|\mathbf{p}|_\alpha$ and the result still holds.)
This is a simple calculation. In particular, the probability distribution $\mathbf{p} = \mathbf{w}/|A|_{MS}$ is invariant: its $\alpha$-cardinality is the same for all $\alpha \in (0, \infty)$. This is a very special property:
Fact 2 Let $I$ be a nonempty subset of $A$ and $\mathbf{w}$ a weighting on $I$ such that $w_i \geq 0$ for all $i$; define a probability distribution $\mathbf{p}$ on $A$ by $p_i = \begin{cases} w_i/|I|_{MS} & \text{if } i \in I \\ 0 & \text{otherwise.} \end{cases}$ Then $\mathbf{p}$ is invariant. Moreover, every invariant probability distribution on $A$ arises in this way.
Given the special case of discrete metric spaces (the Theorem at the end of ‘Previously’), it seems reasonable to view the distribution $\mathbf{w}/|A|_{MS}$ on a metric space as the analogue of the uniform distribution on a set.
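The simple calculation behind Fact 1 is worth seeing once: with $\mathbf{p} = \mathbf{w}/|A|_{MS}$ we get $Z\mathbf{p} = Z\mathbf{w}/|A|_{MS}$, the constant vector with every entry $1/|A|_{MS}$, and the $\alpha$-dependence then drops out of the cardinality formula. A numerical sketch (helper names mine, on a metric space I chose so that the weights happen to be positive):

```python
import numpy as np

# Check of Fact 1: p = w/|A|_MS has alpha-cardinality |A|_MS for every alpha.
# Key point: Zp = Zw/|A|_MS = (1/|A|_MS, ..., 1/|A|_MS).

def alpha_cardinality(p, D, alpha):
    p = np.asarray(p, dtype=float)
    z = np.exp(-np.asarray(D, dtype=float)) @ p  # the densities (Zp)_i
    if alpha == 1:
        return float(np.prod(z ** -p))
    return float(np.sum(p * z ** (alpha - 1)) ** (1 / (1 - alpha)))

D = np.array([[0, 1, 5],
              [1, 0, 4],
              [5, 4, 0]], dtype=float)      # a 3-point metric space
Z = np.exp(-D)
w = np.linalg.solve(Z, np.ones(3))          # the unique weighting (here positive)
mag = w.sum()                               # |A|_MS
p = w / mag                                 # the invariant distribution
for alpha in (0, 0.5, 1, 2, 7):
    print(alpha, round(alpha_cardinality(p, D, alpha), 10))  # always |A|_MS
```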
Fact 3 Let $\mathbf{w}$ be a weighting on $A$. Then for each $\alpha \geq 0$, the function $\begin{matrix} \{\mathbf{p} \in \mathbb{R}^n: \sum p_i = 1\} & \to & \mathbb{R} \\ \mathbf{p} &\mapsto &|\mathbf{p}|_\alpha \end{matrix}$ has a stationary point (critical point) at $\mathbf{p} = \mathbf{w}/|A|_{MS}$.
Again, this holds for all $\alpha$. I do not know under precisely what circumstances this stationary point is a local maximum, or a global maximum, or whether every stationary point is of this form. The best result I know is this:
Fact 4 Suppose that the similarity matrix $Z$ of $A$ is positive-definite, and that $\alpha \geq 2$. Let $\mathbf{w}$ be the unique weighting on $A$, and suppose that all the weights $w_i$ are non-negative. Then the Rough Answer is correct: the maximum value of $|\mathbf{p}|_\alpha$ is $|A|_{MS}$, attained when $\mathbf{p} = \mathbf{w}/|A|_{MS}$.
For example, suppose that $A$ has only three points, as in the wheat and poppies example. Then $Z$ is always positive-definite and the weights are always non-negative, so the $\alpha$-diversity is maximized by growing the three species in proportion to their weights — no matter what value of $\alpha \geq 2$ we choose. Suppose, for instance, that the two species of wheat are distance $1$ apart, and both are distance $10$ from the poppy species. Then a few calculations with a $3\times 3$ matrix tell us that diversity is maximized by growing $29.7$% Persian wheat, $29.7$% Polish wheat and $40.6$% poppies. This fits with our earlier intuitions.
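For the curious, here is the $3 \times 3$ calculation behind those percentages (a sketch; variable names are mine):

```python
import numpy as np

# Wheat and poppies: the two wheats are distance 1 apart,
# and each is distance 10 from the poppies.

D = np.array([[0.0, 1.0, 10.0],
              [1.0, 0.0, 10.0],
              [10.0, 10.0, 0.0]])
Z = np.exp(-D)
w = np.linalg.solve(Z, np.ones(3))   # the unique weighting
p = w / w.sum()                      # the proportions maximizing diversity
print(np.round(100 * p, 1))          # [29.7, 29.7, 40.6] (percent)
print(w.sum())                       # the maximum alpha-cardinality, |A|_MS
```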
Further partial results along these lines are in
S. Pavoine, S. Ollier, D. Pontier, Measuring diversity from dissimilarities with Rao’s quadratic entropy: Are any dissimilarities suitable? Theoretical Population Biology 67 (2005), 231–239.
(Again, there doesn’t seem to be a free copy online. Apparently it’s just not the done thing in mathematical ecology. Ecologists of the world, rise up! Throw off your chains!)
They also do some case studies with real data, calculating, for instance, how to maximize the 2-diversity of a system of 18 breeds of cattle. They use the concept of the cardinality of a metric space (not by that name). They weren’t the first ecologists to do so… but I’ll save that story until the end.
Does mixing increase diversity?
We tend to think of high-entropy situations as those in which a kind of blandness has been attained through everything being mixed together, like white noise, or snow on a TV screen, or all the pots of paint combined to make brown. In other words, we expect mixing to increase entropy. We probably expect it to increase diversity too.
This principle is often formalized as ‘concavity’. Suppose you own two neighbouring fields, each containing a hundred farmyard animals. You open a gate between the fields, effectively making one big field containing two hundred animals. If one of the original fields contained just sheep and the other contained just cows, you would have combined two ecosystems of diversity $0$ to make one ecosystem of diversity $> 0$. More generally, concavity says that the diversity of the combined ecosystem is at least as great as the mean of the two original diversities.
To be precise, fix a finite metric space $A = (\{1, \ldots, n\}, d)$ and let $D$ be a function assigning to each probability distribution $\mathbf{p}$ on $\{1, \ldots, n\}$ a ‘diversity’ $D(\mathbf{p}) \in \mathbb{R}$. Then $D$ is concave if for all probability distributions $\mathbf{p}$ and $\mathbf{q}$ on $A$ and all $\lambda \in [0, 1]$, $D(\lambda \mathbf{p} + (1 - \lambda) \mathbf{q}) \geq \lambda D(\mathbf{p}) + (1 - \lambda) D(\mathbf{q}).$ This is seen as a desirable property of diversity measures.
Example If the metric space $A$ is discrete then the $\alpha$-diversity measure $D_\alpha$ is concave for every $\alpha \geq 0$. (This is in the paper by Patil and Taillie cited in Part 1.)
Question For a general metric space $A$, is $\alpha$-diversity always concave?
Ricotta and Szeidl, who introduced these diversity measures, were especially interested in the $2$-diversity (Rao’s quadratic index $\mathbf{p}^t \Delta \mathbf{p}$, discussed above). They prove that it’s concave when there are at most $4$ species, and state that when $n \geq 5$, ‘the problem has not been solved’. We can now solve it, at least for $n \geq 6$, by adapting a related counterexample of Tao.
Example Let $A$ be the $6$-point metric space whose difference matrix $\Delta$ is the symmetric matrix $\left( \begin{matrix} 0 &a &a &b &b &b \\ &0 &a &b &b &b \\ & &0 &b &b &b \\ & & &0 &a &b \\ & & & &0 &b \\ & & & & &0 \end{matrix} \right)$ where $a = 9/25$ and $b = 1/5$. (Recall that $\Delta_{i j} = 1 - e^{-d_{i j}}$.) Let $\mathbf{p} = (0, 0, 0, 1/5, 1/5, 3/5), \mathbf{q} = (1/5, 1/5, 1/5, 0, 0, 2/5).$ Then $D_2 \left( \frac{1}{2} (\mathbf{p} + \mathbf{q}) \right) = \frac{191}{1250} < \frac{192}{1250} = \frac{1}{2} ( D_2(\mathbf{p}) + D_2(\mathbf{q}) ).$ So for this space $A$, Rao’s diversity index is not concave. And we can get a similar example with any given number $n \geq 6$ of points, by adjoining new points at distance $\infty$ from everything else.
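This example is easy to check by machine. Here is a short Python computation (exact rational arithmetic; the helper name `rao` is mine, not from any paper) verifying the inequality:

```python
from fractions import Fraction

# Exact check of the non-concavity example above, using Rao's
# quadratic index D_2(p) = p^t Delta p.
a, b = Fraction(9, 25), Fraction(1, 5)

# The symmetric 6x6 matrix Delta from the example.
Delta = [
    [0, a, a, b, b, b],
    [a, 0, a, b, b, b],
    [a, a, 0, b, b, b],
    [b, b, b, 0, a, b],
    [b, b, b, a, 0, b],
    [b, b, b, b, b, 0],
]

def rao(p):
    """Rao's quadratic diversity index p^t Delta p."""
    return sum(p[i] * Delta[i][j] * p[j]
               for i in range(6) for j in range(6))

p = [Fraction(x, 5) for x in (0, 0, 0, 1, 1, 3)]
q = [Fraction(x, 5) for x in (1, 1, 1, 0, 0, 2)]
m = [(pi + qi) / 2 for pi, qi in zip(p, q)]  # the mixture (p + q)/2

print(rao(m))                          # 191/1250
print((rao(p) + rao(q)) / 2)           # 192/1250
print(rao(m) < (rao(p) + rao(q)) / 2)  # True: concavity fails
```

Using `Fraction` rather than floats means the two sides of the inequality come out as the exact rationals $191/1250$ and $192/1250$, with no rounding to worry about.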
Ricotta and Szeidl also proved some partial results on the concavity of the diversity measures $D_\alpha$ for values of $\alpha$ other than $2$. But again, $D_\alpha$ is not always concave:
Fact 5 For all $\alpha \geq 0$, there exists a 6-point metric space whose $\alpha$-diversity measure is not concave.
This can be proved by a construction similar to the one for $\alpha = 2$, but taking different values of $a$ and $b$ according to the value of $\alpha$.
How biologists discovered the cardinality of metric spaces
When Christina Cobbold suggested the possibility of a link between the cardinality of metric spaces and diversity measures, I read the paper she gave me, followed up some references, and eventually found — to my astonishment and delight — that the cardinality of a metric space appeared explicitly in the ecology literature:
Andrew Solow, Stephen Polasky, Measuring biological diversity, Environmental and Ecological Statistics 1 (1994), 95–107.
I don’t know how Solow and Polasky would describe themselves, but probably not as category theorists: Solow is director of the Marine Policy Center at Woods Hole Oceanographic Institution, and Polasky is a professor of ecological and environmental economics.
They came to the definition of the cardinality of a metric space by a route completely different from anything discussed so far. They don’t mention entropy. In fact, they almost completely ignore the matter of the abundance of the species present, considering only their similarity. Here’s an outline of the relevant part of their paper.
Preservation of biodiversity is important, but money for it is scarce. So perhaps it’s sensible, or at least realistic, to focus our conservation efforts on those ecosystems most likely to benefit human beings. As they put it,
The discussion so far has essentially assumed that diversity is desirable and has focused on constructing a measure with reasonable properties. A different view is that it is not so much diversity per se that is valuable, but the benefits that diversity provides.
So they embark on a statistical analysis of the likely benefits of high biodiversity, using, for the sake of concreteness, the idea that every species has a certain probability of providing a cure for a disease. Their species form a finite metric space, and similar species are assumed to have similar probabilities of providing a cure.
With this model set up, they compute a lower bound for the probability that a cure exists somewhere in the ecosystem. The bound is an increasing function of a quantity that they pithily describe as the ‘effective number of species’… and that’s exactly the cardinality of the metric space!
Despite the fact that the cardinality appears only as a term in a lower bound, they realize that it’s an interesting quantity and discuss it for a page or so, making some of the same conjectures that I made again fifteen years later: that cardinality is ‘monotone in distance’ (if you increase the distances in a metric space, you increase the cardinality) and that, for a space with $n \geq 1$ points, the cardinality always lies between $1$ and $n$. Thanks to the help of Simon Willerton and Terence Tao, we now know that both conjectures are false.
Summary
In Part 1 we discussed how to measure a probability space. In Part 2 we discussed how to measure a probability metric space. In both we defined, for each $\alpha \geq 0$, the $\alpha$-diversity of a space (the ‘expected surprise’), the $\alpha$-cardinality (the ‘effective number of points’), and the $\alpha$-entropy (its logarithm).
In Part 1, the uniform distribution played a special role. Given a finite set $A$, the uniform distribution is essentially the only distribution on $A$ with the same $\alpha$-cardinality for all $\alpha > 0$. (‘Essentially’ refers to the fact that you can also take the uniform distribution on a subset of $A$, then extend by $0$.) Moreover, for every $\alpha > 0$, it is the distribution with the maximum $\alpha$-cardinality — namely, the cardinality of the set $A$.
In Part 2, where the set $A$ is equipped with a metric, there is in general no distribution with all these properties. However, there’s something that comes close. When $A$ has a non-negative weighting, we have the ‘weight distribution’ on $A$, in which each point of $A$ has probability proportional to its weight. This is essentially the only distribution on $A$ with the same $\alpha$-cardinality for all $\alpha > 0$. Moreover, under certain further hypotheses, it is also the distribution with the maximum $\alpha$-cardinality — namely, the cardinality of the metric space $A$.
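Here is a minimal sketch of the weight distribution's constancy property. It assumes the definitions used earlier in the post: similarities $Z_{i j} = e^{-d_{i j}}$, a weighting $w$ solving $Z w = (1, \ldots, 1)^t$, cardinality of the space $= \sum_i w_i$, and $\alpha$-cardinality of a distribution $\mathbf{p}$ given by $\bigl(\sum_i p_i (Z \mathbf{p})_i^{\alpha - 1}\bigr)^{1/(1 - \alpha)}$.

```python
import math

def solve(Z, rhs):
    """Solve Z x = rhs by Gauss-Jordan elimination (no numpy needed)."""
    n = len(Z)
    M = [row[:] + [rhs[i]] for i, row in enumerate(Z)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# A 3-point metric space with all distances 1.
d = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
Z = [[math.exp(-d[i][j]) for j in range(3)] for i in range(3)]

w = solve(Z, [1.0, 1.0, 1.0])       # the weighting: Z w = (1, 1, 1)^t
magnitude = sum(w)                  # the cardinality of the space
p = [wi / magnitude for wi in w]    # the weight distribution

def card(p, alpha):
    """alpha-cardinality of a distribution p on the metric space."""
    Zp = [sum(Z[i][j] * p[j] for j in range(3)) for i in range(3)]
    return sum(p[i] * Zp[i] ** (alpha - 1) for i in range(3)) ** (1 / (1 - alpha))

# The weight distribution has the same alpha-cardinality for every alpha,
# and that common value is the cardinality of the space itself:
for alpha in (0.5, 2, 3.7):
    print(card(p, alpha), magnitude)
```

The constancy is no accident: for the weight distribution, $(Z \mathbf{p})_i$ is the same at every point, so the $\alpha$-dependence cancels.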
I don’t see the full picture clearly. I don’t know the solution to the general problem of finding the distribution with the maximum cardinality (or entropy, or diversity) on a given metric space. And I don’t know how to incorporate into my intuition the fact that there are some finite metric spaces with negative cardinality, and others with cardinality greater than the number of points.
I also don’t know how one would generalize from distributions on finite metric spaces to distributions on, say, compact or complete metric spaces — but that’s something for another day.
Re: Entropy, Diversity and Cardinality (Part 2)
Wow!
(Apologies for a content-free comment, but you know it’s going to take a while for the rest of us to digest all that…)