## November 12, 2013

### Four New Talks

#### Posted by Tom Leinster

In October I did little but talk. Five talks in five locations in 23 days, with only one duplicate among them, left me heartily wishing not to hear my own voice for a while.

Having gone to the effort of making slides, I might as well share them publicly. All the talks are on topics that have come up on the Café before. Here they are:

Posted at November 12, 2013 4:02 PM UTC


### Re: Four New Talks

With regard to your Eventual Image talk, I’ve been trying to use categories like the $R$ given below, and I wonder how your analysis applies to it, whether other categories with rooted objects behave in a similar fashion, and if $R$ is in the literature.

For $p$ a pointed set and $f$ an endofunction on $p$, say that $f$ is rooted if every element of $p$ can be reached from its point by repeated application of $f$. A rooted endofunction can be represented as a lasso with a leader of length $n$ and a loop of size $m$, i.e. a pair $(n, m)$.
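To make the lasso picture concrete, here is a small Python sketch (the name `lasso_shape` is mine, not from the comment) that extracts the pair $(n, m)$ from an endofunction given as a dict:

```python
def lasso_shape(point, f):
    """Iterate f from the point of the pointed set. The orbit is a
    'lasso': a leader (tail) of length n feeding into a loop of size m.
    Returns the pair (n, m). Raises if f is not rooted, i.e. if some
    element of the underlying set is never reached from the point."""
    seen = {}  # element -> step at which it was first visited
    x, step = point, 0
    while x not in seen:
        seen[x] = step
        x = f[x]
        step += 1
    if len(seen) != len(f):
        raise ValueError("not rooted: some elements are unreachable")
    loop_start = seen[x]  # first visit of the repeated element
    return loop_start, step - loop_start  # (leader length n, loop size m)

# The identity on a one-point set gives (0, 1):
print(lasso_shape(0, {0: 0}))                    # (0, 1)
# A leader of length 2 feeding into a 3-cycle gives (2, 3):
print(lasso_shape(0, {0: 1, 1: 2, 2: 3, 3: 4, 4: 2}))  # (2, 3)
```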

The category $R$, whose objects are all rooted endofunctions on all pointed sets (pairs $(p, f)$, or equivalently shapes $(n, m)$), is a lattice where:

$\bot = (1, \mathrm{id}) = (0, 1)$

$\top = (\mathbb{N}, \mathrm{succ}) = (\infty, \infty)$

$(n_{1}, m_{1}) \vee (n_{2}, m_{2}) = (\max(n_{1}, n_{2}), \operatorname{lcm}(m_{1}, m_{2}))$

$(n_{1}, m_{1}) \wedge (n_{2}, m_{2}) = (\min(n_{1}, n_{2}), \gcd(m_{1}, m_{2}))$
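A minimal Python sketch of the proposed lattice operations, assuming the $(n, m)$ encoding above (`math.lcm` needs Python 3.9+):

```python
from math import gcd, lcm  # lcm is available from Python 3.9 onward

def join(a, b):
    """Join of two lasso shapes (n, m): max on leader lengths,
    lcm on loop sizes, as proposed in the comment above."""
    (n1, m1), (n2, m2) = a, b
    return (max(n1, n2), lcm(m1, m2))

def meet(a, b):
    """Meet: min on leader lengths, gcd on loop sizes."""
    (n1, m1), (n2, m2) = a, b
    return (min(n1, n2), gcd(m1, m2))

print(join((2, 4), (3, 6)))  # (3, 12)
print(meet((2, 4), (3, 6)))  # (2, 2)
```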

$R$ has atoms: $(1, 1)$ is one, as are all objects $(0, p)$ where $p$ is prime. But $R$ is not atomistic (not every object can be given as a join of atoms), because it also has the semi-atoms:

$(n, 1)$ for $n\geq 2$

$(0, m \cdot p)$ for prime multiples.

I guess that $R$ is semi-atomistic.

Posted by: RodMcGuire on November 13, 2013 7:07 PM | Permalink | Reply to this

### Re: Four New Talks

Hmm, I don’t know about that. Does your category satisfy the hypotheses of the theorem in section 5 of my talk? (The slides aren’t currently accessible online, as our IT staff are doing some maintenance work, but they should reappear soon.)

On another eventual-image point, I’ve just discovered (courtesy of Vidit Nanda) a highly relevant paper:

Marian Mrozek, Normal functors and retractors in categories of endomorphisms. Universitatis Iagellonicae Acta Mathematica 29 (1992), 181–198.

It’s detailed enough that I’m not immediately able to grasp the main point, but clearly it has a great deal in common with my work on this. I should spend some time with it. Maybe it would also be useful for your purposes, Rod?

Posted by: Tom Leinster on November 15, 2013 10:03 PM | Permalink | Reply to this

### Re: Four New Talks

Just wanted to say that I quite enjoyed the public talk on entropy. It simultaneously inspires me to make my talks and lectures accessible (I can’t think why almost all presentations shouldn’t follow this model) and depresses me to think of all the times my audience was probably lost! So is there a quick explanation as to why the exponential of entropy is used? I guess entropy (this version) would range from $0$ to $\ln(n)$.

Posted by: sforcey on November 20, 2013 7:08 PM | Permalink | Reply to this

### Re: Four New Talks

Thanks very much. I don’t know whether most “public” talks are like this, but what was predicted to happen at mine and what actually did happen was that many of the audience had some university connection. Quite a few colleagues from my department came, some maths undergraduates, some students from computer science, some people from ecology, and so on. I don’t know what the overall balance was, but my impression was that at most half of them were genuine public, if you see what I mean.

As for entropy: for readers too lazy to click, I was using the quantity

$D(p) = \frac{1}{p_1^{p_1} p_2^{p_2} \cdots p_n^{p_n}},$

where $p = (p_1, \ldots, p_n)$ is a probability distribution. This is the exponential of Shannon entropy,

$H(p) = -\sum_{i = 1}^n p_i \log(p_i).$

Whereas $D(p)$ ranges between $1$ and $n$, the genuine Shannon entropy $H(p)$ ranges between $0$ and $\log(n)$. (Here “exponential” and “log” can be to any base you like; it doesn’t matter.)
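For readers who want to check the relationship numerically, here is a short Python sketch (the function names `H` and `D` simply mirror the notation above, using natural logarithms):

```python
from math import exp, log, prod

def H(p):
    """Shannon entropy of a probability distribution, natural log."""
    return -sum(pi * log(pi) for pi in p if pi > 0)

def D(p):
    """Exponential of entropy: 1 / (p_1^{p_1} * ... * p_n^{p_n})."""
    return 1 / prod(pi ** pi for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
print(H(p))  # about 1.0397, between 0 and log(3)
print(D(p))  # about 2.8284, between 1 and 3, and equal to exp(H(p))

# For the uniform distribution on n outcomes, D gives exactly n:
print(D([0.25] * 4))  # 4.0
```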

A reason to prefer $D(p)$ over $H(p)$ is that it’s an effective number. This means that if you have a population of $n$ equally abundant species, $p = (1/n, \ldots, 1/n)$, then the quantity assigned to it is $n$. If you’re trying to measure diversity (and this is what $D$ stands for), that’s important.

To explain why, I’ll steal an example from Lou Jost. Suppose a continent contains a million species of plant, all equally abundant. A meteor strikes and wipes out 99% of the species entirely, leaving the other 1% untouched. The diversity $D(p)$ drops by 99%, as you’d expect. But the Shannon entropy drops by only 33%.
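The meteor arithmetic can be checked directly: for uniform distributions, $D = n$ and $H = \log n$, so the two percentage drops are:

```python
from math import log

before = 10**6   # a million equally abundant species
after = 10**4    # the 1% that survive, still equally abundant

# Uniform distributions: D(p) = n and H(p) = log(n).
D_drop = 1 - after / before
H_drop = 1 - log(after) / log(before)

print(D_drop)  # 0.99 -- diversity drops by 99%
print(H_drop)  # about 0.333 -- entropy drops by only a third
```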

So percentage changes of Shannon entropy aren’t useful or meaningful, because it’s not an effective number. But when you do have an effective number, such as $D$, you can reasonably interpret a conclusion such as “for this population, $D(p) = 12.2$” to mean “our community is about as diverse as a population of 12 equally abundant species”, or more briefly “we’ve effectively got 12 species”.

Posted by: Tom Leinster on November 20, 2013 7:35 PM | Permalink | Reply to this
