Entropy, Diversity and Cardinality (Part 1)
Posted by David Corfield
Guest post by Tom Leinster
This is the first of two posts about
- the difficult problem of how to quantify biodiversity
- the concept of the cardinality of a metric space.
The connection is provided by that important and subtle notion, entropy.
The ideas I’ll present depend crucially on the insights of two people. First, André Joyal explained to me the connection between cardinality and entropy. Then, Christina Cobbold told me about the connection between entropy and biodiversity, and suggested that there might be a direct link between measures of biodiversity and the cardinality of a metric space. She was more right than she knew: it turns out that the cardinality of a metric space, which I’d believed to be a new concept coming from enriched category theory, was discovered 15 years ago by ecologists!
Outline
Suppose you’re a scientist investigating the impact of human activities on the biodiversity of some particular ecosystem: say, the forests of Indonesia. To do this rigorously, you’ll need some way of quantifying biodiversity. In other words, you’ll need a way of taking a raw mass of ecological data and turning it into a single number.
Being an open-minded and well-educated scientist, you’d be happy in principle if your ‘number’ lay in some number system of a non-traditional kind (e.g. an abstract rig), but you see that there are certain advantages to sticking with the reals. For example, you can meaningfully say things like ‘the biodiversity of the Indonesian forests has fallen by 23% in 10 years’. We’ll stick to real values here.
It turns out that there are many sensible measures of biodiversity — ecologists have been debating their relative merits for years. Two of the most important aspects of an ecosystem that you might want a diversity measure to reflect are:
1. Abundance: the proportions in which the species occur (e.g. 50% grass, 30% clover, 20% daisies)
2. Similarity: the extent to which the species are related (e.g. an ecosystem made up of 50% snails and 50% slugs should probably be regarded as less diverse than one made up of 50% snails and 50% bats).
This first post will be about (1) only. We’ll look at a family of diversity measures taking only abundance into account, and use it to explore notions of entropy and cardinality.
The second post will be about (1) and (2) together. There, metric spaces will make their entrance.
Diversity
Diversity is a widely applicable concept: just as you can talk about diversity of species in an ecosystem, you could also talk about diversity of types of rock on a mountain, words in a novelist’s work, etc. Nevertheless, I’ll continue to discuss it in the ecological setting. This is partly because it’s what I’ve thought about most, partly because biodiversity is important, and partly because it lends itself well to vivid imagery.
So let’s imagine an ecosystem in which $n$ species occur, in proportions $p_1, \ldots, p_n$ respectively. Thus, $p_i \geq 0$ and $p_1 + \cdots + p_n = 1$. I’ll refer to a finite family $p = (p_1, \ldots, p_n)$ of non-negative reals summing to $1$ as a finite probability space. (This is a slight abuse of terminology, but never mind.) In the simple example above where the ecosystem consisted of just grass, clover and daisies, we had $n = 3$ and $(p_1, p_2, p_3) = (0.5, 0.3, 0.2)$.
Some decision has to be made about how exactly the proportions, or ‘relative abundances’, are measured. It could be done according to the number of individuals of each species, or the total mass of each species (so that an ant counts for less than an antelope), or any other measure thought to be helpful. I’ll assume that this decision has been made.
Our task now is to turn the probability space into a single real number, representing the ‘diversity’ of the system. Here are the three ways of doing it most popular among ecologists.
0. Species richness Just count the number of species present. It’s best to then subtract $1$, since an ecosystem containing only one species (e.g. a field containing only wheat) is usefully thought of as having zero diversity. (Ecosystems containing no species at all are off the scale.) So with the notation above, the species richness is defined to be $n - 1$.
This measure is not only crude, but also, statistically, very sensitive to sample size. In an ecosystem where most species are rare, even a large sample may fail to detect many species and so drastically underestimate the species richness.
1. Shannon entropy Its relevance to ecology has been described as ‘tenuous’; nevertheless, it is one of the most widely-used measures of biodiversity. The Shannon entropy, or information entropy, or information diversity, of the probability space $(p_1, \ldots, p_n)$ is
$$-\sum_{i=1}^n p_i \log p_i.$$
We use the convention that $p_i \log p_i = 0$ when $p_i = 0$, since $\lim_{x \to 0^+} x \log x = 0$. (Alternatively, as every category theorist knows, $0^0 = 1$, so $0 \log 0 = \log(0^0) = \log 1 = 0$.)
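For instance, taking the grass/clover/daisies ecosystem above and natural logarithms (my own arithmetic, rounded), the Shannon entropy is
$$-(0.5 \log 0.5 + 0.3 \log 0.3 + 0.2 \log 0.2) \approx 0.347 + 0.361 + 0.322 \approx 1.030.$$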
I’ll have much more to say about entropy later. For now, let’s just record some of its basic properties.
First, Shannon entropy is always non-negative. Second, it’s zero if and only if some $p_i$ is $1$ and the rest are $0$. In other words, when the ecosystem is made up entirely of one species, diversity is zero. Third, entropy/diversity is maximized (for a fixed $n$) when all the species occur in equal abundance: $p_1 = \cdots = p_n = 1/n$. In that case, the entropy is $\log n$.
2. Simpson diversity Another simple diversity measure born in the 1940s is Simpson diversity,
$$1 - \sum_{i=1}^n p_i^2.$$
Like Shannon entropy, it’s always non-negative, it’s zero if and only if some $p_i$ is $1$, and it’s maximized (for fixed $n$) when $p_1 = \cdots = p_n = 1/n$. In that case, its value is $1 - 1/n$; and as we would expect of a measure of diversity, this is an increasing function of $n$.
Simpson diversity has the advantage of being quadratic, which makes it amenable to methods of multilinear algebra. It also has certain statistical advantages (such as the existence of an unbiased estimator). And as these notes point out, it’s the probability that two randomly-chosen individuals are of different species.
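For the grass/clover/daisies ecosystem, that probability works out (my own arithmetic) to
$$1 - (0.5^2 + 0.3^2 + 0.2^2) = 1 - 0.38 = 0.62,$$
i.e. two randomly-chosen individuals have a 62% chance of belonging to different species.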
I learned about diversity measures from
Carlo Ricotta, Laszlo Szeidl, Towards a unifying approach to diversity measures: Bridging the gap between the Shannon entropy and Rao’s quadratic index, Theoretical Population Biology 70 (2006), 237–243
and Chapter 7 of
Russell Lande, Steinar Engen, Bernt-Erik Sæther, Stochastic Population Dynamics in Ecology and Conservation, Oxford University Press, 2003
(both courtesy of Christina Cobbold). Unfortunately there seems to be no free copy of either online, but the Wikipedia article on diversity measures gives some basic information.
Bringing them all together The crucial observation now is that all three of these useful diversity measures are members of a continuous, one-parameter family of measures.
To understand this, it can help to think in terms of ‘surprise’. How surprised would you be if you found an ant on an ant farm? Not surprised at all. How surprised would you be if you found a Yangtze river dolphin in the Yangtze? Sadly, you should be extremely surprised. A surprise function assigns to each probability $p \in [0, 1]$ a degree of surprise $\sigma(p)$; we require $\sigma$ to be decreasing (so that more probable events are less surprising) and to satisfy $\sigma(1) = 0$ (so that the occurrence of an event of probability $1$ is no surprise at all).
From any surprise function, you can obtain a measure of diversity. If the surprise function is called $\sigma$, the diversity measure assigns to a probability space $(p_1, \ldots, p_n)$ the quantity $\sum_{i=1}^n p_i \sigma(p_i)$, the expected surprise. ‘Expected surprise’ might sound paradoxical, but it’s not. How surprised do you expect to be tomorrow? Personally, I expect to be mildly surprised but not astonished: that’s what most of my days are like. In an ecosystem containing only one species, you’ll never be at all surprised at what you find, so your expected surprise is $0$; correspondingly, the diversity is $0$. On the other hand, imagine picking individuals at random from an ecosystem containing many species in equal proportions: then you’ll always be a bit surprised at what you find. (Sometimes, as here, the ‘surprise’ metaphor seems a bit strained. You can think instead of related concepts such as unpredictability, rarity, or information content.)
Now let’s define that one-parameter family of diversity measures. First define, for each $\alpha \geq 0$, a surprise function $\sigma_\alpha$ by
$$\sigma_\alpha(p) = \begin{cases} \dfrac{1 - p^{\alpha - 1}}{\alpha - 1} & \text{if } \alpha \neq 1 \\[1ex] -\log p & \text{if } \alpha = 1. \end{cases}$$
The reason for the second clause is that the first doesn’t make sense when $\alpha = 1$, but
$$\lim_{\alpha \to 1} \frac{1 - p^{\alpha - 1}}{\alpha - 1} = -\log p$$
(by l’Hôpital’s rule, or by evaluating $\frac{d}{d\alpha} p^{\alpha - 1} \big|_{\alpha = 1}$ in two different ways.) The resulting diversity measure $D_\alpha$ is given by
$$D_\alpha(p_1, \ldots, p_n) = \sum_{i=1}^n p_i \, \sigma_\alpha(p_i) = \begin{cases} \dfrac{1 - \sum_i p_i^\alpha}{\alpha - 1} & \text{if } \alpha \neq 1 \\[1ex] -\sum_i p_i \log p_i & \text{if } \alpha = 1. \end{cases}$$
In particular,
- $D_0(p) = n - 1$, the species richness
- $D_1(p) = -\sum_i p_i \log p_i$, the Shannon entropy
- $D_2(p) = 1 - \sum_i p_i^2$, the Simpson diversity.
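For concreteness, here is a small numerical sketch of this family (my own code for the running example; the function name `diversity` and the use of NumPy are my choices, not anything from the sources cited below):

```python
import numpy as np

def diversity(p, alpha):
    """Diversity of order alpha: the expected surprise sum_i p_i * sigma_alpha(p_i)."""
    p = np.asarray([x for x in p if x > 0], dtype=float)   # species actually present
    if alpha == 1:
        return float(-np.sum(p * np.log(p)))               # Shannon entropy (the alpha -> 1 limit)
    return float((1.0 - np.sum(p ** alpha)) / (alpha - 1.0))

p = [0.5, 0.3, 0.2]                 # grass, clover, daisies
print(diversity(p, 0))              # 2.0   = n - 1, the species richness
print(diversity(p, 1))              # ~1.030, the Shannon entropy
print(diversity(p, 2))              # 0.62  = 1 - (sum of squares), the Simpson diversity
```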
The measures $D_\alpha$ have good basic properties, at least for $\alpha > 0$. We always have $D_\alpha(p) \geq 0$, with equality if and only if some $p_i$ is $1$. For fixed $n$, $D_\alpha(p)$ is maximized when $p_1 = \cdots = p_n = 1/n$, and in that case its value is
$$\frac{1 - n^{1 - \alpha}}{\alpha - 1} \quad (\alpha \neq 1), \qquad \log n \quad (\alpha = 1).$$
These expressions aren’t very convenient. We’ll fix that later.
This one-parameter family of measures appears to have been discovered several times over. According to the paper of Ricotta and Szeidl cited above, it was discovered in information theory —
J. Aczél, Z. Daróczy, On Measures of Information and their Characterizations, Academic Press (1975)
— then independently in ecology —
G.P. Patil, C. Taillie, Diversity as a concept and its measurement, Journal of the American Statistical Association 77 (1982), 548–567
— and then again independently in physics —
Constantino Tsallis, Possible generalization of Boltzmann–Gibbs statistics, Journal of Statistical Physics 52 (1988), 479–487.
(I haven’t looked up these sources.) Accordingly, people in different disciplines attribute it differently; e.g. physicists seem to call it Tsallis entropy.
Sometimes these diversity measures are referred to as ‘entropy of degree $\alpha$’. But I want to reserve the term ‘entropy’, as I’ll explain in a moment.
Entropy
There are many related quantities called ‘entropy’. The notion appears in physics, communications engineering, statistics, linguistics, dynamical systems, …, as well as ecology; it can be thought of as measuring disorder, information content, uncertainty, uniformity, diversity, ….
Stick out your arm in the $n$-Category Café and you’ll knock over the coffee of someone who knows more about entropy than I do. Witness, for instance, this learned conversation of Ben Allen and Chris Hillman; David, in his machine learning days, used exotic-sounding related concepts such as Kullback–Leibler divergence; John and Urs doubtless know all about entropy in physics. But for now, I’ll stick humbly to the example above: the Shannon entropy of a finite probability space $p = (p_1, \ldots, p_n)$.
An important property of Shannon entropy is that it is log-like. In other words, let $p = (p_1, \ldots, p_n)$ and $r = (r_1, \ldots, r_m)$ be finite probability spaces. There is an obvious ‘product’ space $p \otimes r = (p_i r_j)_{i, j}$, and the log-like property is this: writing $H$ for Shannon entropy,
$$H(p \otimes r) = H(p) + H(r).$$
This is one of the things that makes $D_1 = H$ more convenient than the other diversity measures defined above: it is only for $\alpha = 1$ that this property holds. I’ll reserve the word ‘entropy’ for log-like measures.
(The proof that $H$ is log-like is a harmless exercise, depending on two things. One is that these are probability spaces: $\sum_i p_i = 1 = \sum_j r_j$. The other is that the function $x \mapsto -x \log x$ is a derivation: $-(x y) \log(x y) = x \cdot (-y \log y) + (-x \log x) \cdot y$.)
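Spelled out (this is just my own writing-out of that exercise, using the two facts above):
$$H(p \otimes r) = -\sum_{i, j} p_i r_j \log(p_i r_j) = -\sum_j r_j \sum_i p_i \log p_i \; - \; \sum_i p_i \sum_j r_j \log r_j = H(p) + H(r).$$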
Aside: a question Here’s something I’d like to understand. Given probability spaces $p$ and $r$ as above, and given $\lambda, \mu \geq 0$ such that $\lambda + \mu = 1$, there’s a new probability space
$$(\lambda p_1, \ldots, \lambda p_n, \mu r_1, \ldots, \mu r_m)$$
and we have
$$H(\lambda p_1, \ldots, \lambda p_n, \mu r_1, \ldots, \mu r_m) = \lambda H(p) + \mu H(r) + H(\lambda, \mu).$$
More generally, there is a symmetric operad $P$ given by
$$P(n) = \{ (p_1, \ldots, p_n) : p_i \geq 0, \ p_1 + \cdots + p_n = 1 \},$$
with composition as follows: if $p \in P(n)$ and $r^1 \in P(k_1), \ldots, r^n \in P(k_n)$ then
$$p \circ (r^1, \ldots, r^n) = (p_1 r^1_1, \ldots, p_1 r^1_{k_1}, \ \ldots, \ p_n r^n_1, \ldots, p_n r^n_{k_n}) \in P(k_1 + \cdots + k_n).$$
(A $P$-algebra might be called a ‘convex algebra’; for instance, any convex subset of a real vector space is naturally one.) We have
$$H(p \circ (r^1, \ldots, r^n)) = H(p) + \sum_{i=1}^n p_i H(r^i).$$
The question is: what’s going on here, abstractly? I’m imagining restating this equation in such a way that there is no mention of the $p_i$s, and thus, perhaps, finding a good characterization of entropy in operadic terms.
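For what it’s worth, the operadic equation can be checked directly, just as in the exercise above (my own working, using $\sum_j r^i_j = 1$ and $\sum_i p_i = 1$):
$$H(p \circ (r^1, \ldots, r^n)) = -\sum_{i, j} p_i r^i_j \log(p_i r^i_j) = -\sum_i p_i \log p_i \; - \; \sum_i p_i \sum_j r^i_j \log r^i_j = H(p) + \sum_i p_i H(r^i).$$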
Cardinality
André Joyal explained to me that he likes to think of entropy as something like cardinality. More precisely, the exponential of entropy is like cardinality. To test out this point of view, let’s write $|p| = e^{H(p)}$ for any finite probability space $p$, and call it the cardinality of $p$.
(There are slightly different conventions on Shannon entropy. Because of its applications to digital communication, some people like to take their logarithms to base $2$; others use base $e$, as I’m doing here. Cardinality is independent of this choice.)
Now let’s translate the basic properties of entropy into properties of cardinality, to see if cardinality deserves its name. Since entropy is always non-negative, we always have $|p| \geq 1$. We have $|p| = 1$ if and only if some $p_i$ is $1$ and the rest are $0$, a situation that can be interpreted as
there is effectively only one species.
For a fixed $n$, the cardinality is maximized when $p_1 = \cdots = p_n = 1/n$, and in that case $|p| = n$; this situation can be interpreted as
all species are fully present.
Finally, the log-like property of entropy translates as $|p \otimes r| = |p| \cdot |r|$: cardinality is multiplicative.
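For instance, the grass/clover/daisies ecosystem has Shannon entropy approximately $1.030$ (computed above), so its cardinality is $e^{1.030} \approx 2.80$. Informally (my own gloss on the numbers), it is about as diverse as an ecosystem of $2.8$ equally abundant species.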
This is all very satisfactory. Our ‘cardinality’ has the properties that one might intuitively hope for. Furthermore, it corresponds very closely to a well-known and useful measure of diversity, the Shannon entropy $H = D_1$. But this is not the only useful measure of diversity: there are, at least, all the other measures $D_\alpha$ ($\alpha \geq 0$). Is there a corresponding notion of ‘$\alpha$-cardinality’ for every $\alpha$, with the same good properties?
The answer is yes, but that’s not quite obvious. It’s no use defining the $\alpha$-cardinality of $p$ to be $e^{D_\alpha(p)}$: for, since $D_\alpha$ is not log-like except when $\alpha = 1$, this ‘$\alpha$-cardinality’ would not be multiplicative. So that wouldn’t be a useful definition.
Let’s take stock. We’ve fixed an $\alpha \geq 0$, and we’re trying to define, for each finite probability space $p$, its ‘$\alpha$-cardinality’ $|p|_\alpha$. We want to do this in such a way that:
1. $|p|_\alpha$ is a function (preferably invertible) of $D_\alpha(p)$
2. $|p|_\alpha \geq 1$
3. $|p|_\alpha = 1$ if some $p_i$ is $1$
4. $|p|_\alpha = n$ if $p_1 = \cdots = p_n = 1/n$
5. $|p \otimes r|_\alpha = |p|_\alpha \cdot |r|_\alpha$.
I’ll skip some elementary steps here, but it’s not hard to see that these requirements (in fact, (1) and (4) alone) pretty much force the answer on us. It turns out that we need $D_\alpha(p)$ and $|p|_\alpha$ to be related by the equation
$$D_\alpha(p) = \frac{1 - |p|_\alpha^{\,1 - \alpha}}{\alpha - 1}$$
(for $\alpha \neq 1$; the limiting case $\alpha = 1$ reads $D_1(p) = \log |p|_1$), and a small amount of elementary algebra then gives us the definition: the $\alpha$-cardinality of the finite probability space $p = (p_1, \ldots, p_n)$ is
$$|p|_\alpha = \Big( \sum_i p_i^\alpha \Big)^{1/(1 - \alpha)} \qquad (\alpha \neq 1).$$
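The elementary algebra, spelled out (my own working, for $\alpha \neq 1$): equating the two expressions for $D_\alpha(p)$ gives
$$\frac{1 - |p|_\alpha^{\,1 - \alpha}}{\alpha - 1} = \frac{1 - \sum_i p_i^\alpha}{\alpha - 1}, \qquad \text{so} \qquad |p|_\alpha^{\,1 - \alpha} = \sum_i p_i^\alpha,$$
which rearranges to the formula just stated.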
(The 1-cardinality is just the cardinality. Here at the $n$-Category Café, we’re used to the convention that ‘1-widget’ means the same as ‘widget’.)
It’s easy to confirm that properties (1)–(5) do indeed hold for $\alpha$-cardinality, for every $\alpha \geq 0$.
Example: The diversity measure $D_0$ is very simple: $D_0(p) = n - 1$. (Recall that the motivation for subtracting $1$ was to make a one-species ecosystem have diversity zero.) The $0$-cardinality of $p$ is just $|p|_0 = n$, the number of species: obviously a useful quantity too!
Example: This is the motivating example: $D_1$ is the Shannon entropy, and $1$-cardinality is cardinality.
Example: We saw that $D_2$ is Simpson diversity: $D_2(p) = 1 - \sum_i p_i^2$. The $2$-cardinality of $p$ is
$$|p|_2 = \frac{1}{\sum_i p_i^2},$$
which is also often used as a measurement of diversity (and also sometimes called Simpson diversity).
Example: It’s easy to show that for any probability space $p$, we have $\lim_{\alpha \to \infty} D_\alpha(p) = 0$. However, if we work with cardinality rather than diversity, something more interesting happens:
$$\lim_{\alpha \to \infty} |p|_\alpha = \frac{1}{\max_i p_i}.$$
This (or its reciprocal) is sometimes called the Berger–Parker index, and might as well be written $|p|_\infty$. It has all the good properties (2)–(5) of cardinalities.
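Continuing the numerical sketch from earlier (again my own code, not anything from the post’s sources), here are some $\alpha$-cardinalities of the grass/clover/daisies ecosystem; notice how the values decrease towards the Berger–Parker value $1/\max_i p_i = 2$ as $\alpha$ grows:

```python
import numpy as np

def cardinality(p, alpha):
    """alpha-cardinality of p: (sum_i p_i^alpha)^(1/(1-alpha)),
    with alpha = 1 taken as the limit exp(Shannon entropy)."""
    p = np.asarray([x for x in p if x > 0], dtype=float)   # species actually present
    if alpha == 1:
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** alpha) ** (1.0 / (1.0 - alpha)))

p = [0.5, 0.3, 0.2]                  # grass, clover, daisies
for a in [0, 1, 2, 10, 100]:
    print(a, round(cardinality(p, a), 3))
# 0 -> 3.0 (number of species), 1 -> ~2.8, 2 -> ~2.632 (= 1 / sum of squares),
# 10 -> ~2.159, 100 -> ~2.014: approaching 1 / max p_i = 2
```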
Back to entropy
For some purposes it’s preferable to use a measure that’s log-like (as entropy is) rather than multiplicative (as cardinality is). For example, in information theory it’s natural to count how many bits are needed to encode a message, and that’s a log-like measure.
So, for any $\alpha \geq 0$, let’s define the $\alpha$-entropy of a finite probability space $p$ to be
$$H_\alpha(p) = \log |p|_\alpha.$$
The $\alpha$-entropy is usually called the Rényi entropy of order $\alpha$. By construction, each $H_\alpha$ is log-like, takes its minimal value $0$ when some $p_i$ is $1$, and, for a fixed $n$, takes its maximum value $\log n$ when $p_1 = \cdots = p_n = 1/n$. The $1$-entropy is just the Shannon entropy $H$.
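Explicitly, for $\alpha \neq 1$ this reads $H_\alpha(p) = \frac{1}{1 - \alpha} \log \sum_i p_i^\alpha$, the usual formula for Rényi entropy; and (my own check, via l’Hôpital) $\lim_{\alpha \to 1} \frac{\log \sum_i p_i^\alpha}{1 - \alpha} = -\sum_i p_i \log p_i$, so the family really does pass through Shannon entropy at $\alpha = 1$.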
Summary, and preview of Part 2
We’ve been discussing ecosystems, which for the purposes of this first post are simply finite sequences $(p_1, \ldots, p_n)$ of non-negative numbers summing to $1$. We’ve seen three families of measures of ecosystems, each indexed over non-negative real numbers $\alpha$:
- The diversity measures $D_\alpha$. The values $\alpha = 0, 1, 2$ correspond to the most popular diversity measures in ecology. Generally, the measure $D_\alpha$ can be interpreted as ‘expected surprise’.
- The cardinalities $|\cdot|_\alpha$. These have excellent mathematical properties, e.g. the $\alpha$-cardinality of an $n$-species ecosystem is always between $1$ and $n$, and $\alpha$-cardinality is multiplicative.
- The Rényi entropies $H_\alpha$. Again, these have excellent mathematical properties, and like Shannon entropy (the case $\alpha = 1$), they are all log-like.
The three ways of measuring are completely interchangeable: for a fixed $\alpha$, any of the three numbers $D_\alpha(p)$, $|p|_\alpha$ and $H_\alpha(p)$ can be derived from any of the others. The formulas above tell you how.
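For the record, here are the conversions gathered in one place (my own compilation of formulas already appearing above): for $\alpha \neq 1$,
$$|p|_\alpha = \big(1 - (\alpha - 1) D_\alpha(p)\big)^{1/(1 - \alpha)} = e^{H_\alpha(p)}, \qquad D_\alpha(p) = \frac{1 - |p|_\alpha^{\,1 - \alpha}}{\alpha - 1}, \qquad H_\alpha(p) = \log |p|_\alpha,$$
and in the limiting case $\alpha = 1$ we simply have $|p|_1 = e^{H_1(p)} = e^{D_1(p)}$.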
What’s next?
Well, a weak point in everything so far is the extremely crude modelling. We’ve taken the species in our ecosystem to form a mere set: two species are either equal or not, and that’s that. But when you think about biodiversity, about the variety of life, you instinctively grasp that there’s more to it: some species are quite similar, some very different. This should influence the measurement of diversity.
In the second post I’ll explain a way of building this into the model. Specifically, the collection of species in an ecosystem will be modelled as a metric space instead of a mere set. (A set can be regarded as a metric space in which every point is at distance $\infty$ from every other point.) This will lead us into connections between biodiversity, entropy and the cardinality of metric spaces.
Re: Entropy, Diversity and Cardinality (Part 1)
Do you choose your surprise function so as to arrive at this expectation of mild surprise, or is it chosen independently? Imagine someone who finds life extraordinarily unsurprising or extraordinarily surprising. These would be, I think, unpleasant situations to be in, but would we not say that their surprise functions were not well-tuned?
Or perhaps we organise our lives so that for our chosen surprise function, life is as surprising as we desire. We opt for a quiet life or a hectic one, etc.
I would suspect that a bit of both occurs.
Another ‘pathology’ we might observe is someone taking an event of a certain kind to be much less probable than we consider it to be. Perhaps being amazed to find two people in a group of 26 having the same birthday.