
November 12, 2013

Four New Talks

Posted by Tom Leinster

In October I did little but talk. Five talks in five locations in 23 days, with only one duplicate among them, left me heartily wishing not to hear my own voice for a while.

Having gone to the effort of making slides, I might as well share them publicly. All the talks are on topics that have come up on the Café before. Here they are:

Posted at November 12, 2013 4:02 PM UTC


4 Comments & 0 Trackbacks

Re: Four New Talks

With regard to your Eventual Image talk, I’ve been trying to use categories like the $R$ given below, and wonder how your analysis applies to it and whether other categories with rooted objects behave in a similar fashion (and if $R$ is in the literature).

For $p$ a pointed set and $f$ an endofunction on $p$, say that $f$ is rooted if every element of $p$ can be reached from its point by applications of $f$. A rooted endofunction can be represented as a lasso with a leader of length $n$ and a loop of size $m$, i.e. a pair $(n, m)$.

The category $R$, with objects all rooted endofunctions on all pointed sets (pairs $(p, f)$ or $(n, m)$), is a lattice where:

$\bot = (1, \mathrm{id}) = (0, 1)$

$\top = (\mathbb{N}, \mathrm{succ}) = (\infty, \infty)$

$(n_1, m_1) \vee (n_2, m_2) = (\max(n_1, n_2), \mathrm{lcm}(m_1, m_2))$

$(n_1, m_1) \wedge (n_2, m_2) = (\min(n_1, n_2), \gcd(m_1, m_2))$

$R$ has atoms: $(1, 1)$ is one, as are all objects $(0, p)$ where $p$ is prime. $R$ is not atomistic (not every object can be given as a join of atoms) because it also has the semi-atoms:

$(n, 1)$ for $n \geq 2$

$(0, m \cdot p)$ for prime multiples.

I guess that $R$ is semi-atomistic.
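(For concreteness, here’s a minimal sketch of the join and meet just described, using the $(n, m)$ encoding above; the function names and the choice of Python are mine, not part of the construction.)

```python
from math import gcd, lcm  # math.lcm requires Python 3.9+

def join(x, y):
    """Join of two rooted endofunctions encoded as lasso shapes (n, m):
    leader lengths combine by max, loop sizes by lcm."""
    (n1, m1), (n2, m2) = x, y
    return (max(n1, n2), lcm(m1, m2))

def meet(x, y):
    """Meet: leader lengths combine by min, loop sizes by gcd."""
    (n1, m1), (n2, m2) = x, y
    return (min(n1, n2), gcd(m1, m2))

bottom = (0, 1)  # the object (1, id); the top (N, succ) corresponds to (inf, inf)

print(join((2, 3), (5, 4)))  # (5, 12)
print(meet((2, 3), (5, 4)))  # (2, 1)
```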

Posted by: RodMcGuire on November 13, 2013 7:07 PM | Permalink | Reply to this

Re: Four New Talks

Hmm, I don’t know about that. Does your category satisfy the hypotheses of the theorem in section 5 of my talk? (The slides aren’t currently accessible online, as our IT staff are doing some maintenance work, but they should reappear soon.)

On another eventual-image point, I’ve just discovered (courtesy of Vidit Nanda) a highly relevant paper:

Marian Mrozek, Normal functors and retractors in categories of endomorphisms. Universitatis Iagellonicae Acta Mathematica 29 (1992), 181–198.

It’s detailed enough that I’m not immediately able to grasp the main point, but clearly it has a great deal in common with my work on this. I should spend some time with it. Maybe it would also be useful for your purposes, Rod?

Posted by: Tom Leinster on November 15, 2013 10:03 PM | Permalink | Reply to this

Re: Four New Talks

Just wanted to say that I quite enjoyed the public talk on entropy. It simultaneously inspires me to make my talks and lectures accessible (I can’t think why almost all presentations shouldn’t follow this model) and depresses me to think of all the times my audience was probably lost! So is there a quick explanation as to why the exponential of entropy is used? I guess entropy (this version) would range from $0$ to $\ln(n)$.

Posted by: sforcey on November 20, 2013 7:08 PM | Permalink | Reply to this

Re: Four New Talks

Thanks very much. I don’t know whether most “public” talks are like this, but what was predicted to happen at mine and what actually did happen was that many of the audience had some university connection. Quite a few colleagues from my department came, some maths undergraduates, some students from computer science, some people from ecology, and so on. I don’t know what the overall balance was, but my impression was that at most half of them were genuine public, if you see what I mean.

As for entropy: for readers too lazy to click, I was using the quantity

$$ D(p) = \frac{1}{p_1^{p_1} p_2^{p_2} \cdots p_n^{p_n}}, $$

where $p = (p_1, \ldots, p_n)$ is a probability distribution. This is the exponential of Shannon entropy,

$$ H(p) = -\sum_{i = 1}^n p_i \log(p_i). $$

Whereas $D(p)$ ranges between $1$ and $n$, the genuine Shannon entropy $H(p)$ ranges between $0$ and $\log(n)$. (Here “exponential” and “log” can be to any base you like; it doesn’t matter.)

A reason to prefer $D(p)$ over $H(p)$ is that it’s an effective number. This means that if you have a population of $n$ equally abundant species, $p = (1/n, \ldots, 1/n)$, then the quantity assigned to it is $n$. If you’re trying to measure diversity (and this is what $D$ stands for), that’s important.
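(A quick numerical check of that property, for readers who like to compute; the function names and the use of natural logarithms are my choices, not anything from the talk.)

```python
import math

def shannon_entropy(p):
    """H(p) = -sum p_i log p_i (natural log), skipping zero entries."""
    return -sum(q * math.log(q) for q in p if q > 0)

def diversity(p):
    """Effective number of species: D(p) = exp(H(p)) = 1 / prod p_i^{p_i}."""
    return math.exp(shannon_entropy(p))

n = 12
uniform = [1 / n] * n
print(shannon_entropy(uniform))  # log(12), about 2.4849
print(diversity(uniform))        # 12.0: n equally abundant species give D = n
```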

To explain why, I’ll steal an example from Lou Jost. Suppose a continent contains a million species of plant, all equally abundant. A meteor strikes and wipes out 99% of the species entirely, leaving the other 1% untouched. The diversity $D(p)$ drops by 99%, as you’d expect. But the Shannon entropy drops by only 33%.
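(Spelling out the arithmetic, with natural logs and uniform distributions throughout; this little check is mine rather than part of Lou Jost’s example.)

```python
import math

# A million equally abundant species before the strike; 1% (ten thousand) survive.
H_before, H_after = math.log(10**6), math.log(10**4)      # entropy of a uniform distribution on n species is log(n)
D_before, D_after = math.exp(H_before), math.exp(H_after)  # effective numbers: 10**6 and 10**4

print(1 - D_after / D_before)  # 0.99: diversity drops by 99%
print(1 - H_after / H_before)  # about 0.333: Shannon entropy drops by only about a third
```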

So percentage changes of Shannon entropy aren’t useful or meaningful, because it’s not an effective number. But when you do have an effective number, such as $D$, you can reasonably interpret a conclusion such as “for this population, $D(p) = 12.2$” to mean “our community is about as diverse as a population of 12 equally abundant species”, or more briefly “we’ve effectively got 12 species”.

Posted by: Tom Leinster on November 20, 2013 7:35 PM | Permalink | Reply to this
