One more post of recipes, and then I’ll get back to math, I promise!

One of my proudest accomplishments is that, somehow, I successfully taught my children not just to eat, but to *love* vegetables. I think part of that is genetic – neither of them is a supertaster. But the other part of it is a combination of training and cooking.

The training side is, I think, simple. Most adults are convinced that vegetables are icky but necessary. They’re wrong. But they actively teach that to their children. They make eating vegetables, even when they’re delicious, into a chore.

The other side is that because most adults think that veggies are icky, they cook them in ways that don’t taste good.

Take one of my favorite vegetables as an example: Brussels sprouts. My children will actually *fight* over who gets the last bite of Brussels sprouts. When we’re talking about what to make for dinner, they beg me to make them! But when I mention this to most people, they act like I’m absolutely insane: Brussels sprouts are gross!

If you take a bunch of Brussels sprouts and throw them into boiling water for 20 minutes, what you get is a stinky, pungent, bitter, mushy, revolting little ball of green awfulness. But if you slice them thin, and sauté them in a smoking hot pan with olive oil, salt and pepper, and a bit of garlic, until the edges start to turn brown, what you get is absolutely amazing: sweet, crisp, and wonderful.

So, what I’m going to share here is a few vegetable side dishes I made in the last week, all of which were fantastic. All of them are roasted vegetables – for some reason, people don’t think about roasting veggies, but it’s often one of the easiest and tastiest ways to cook them.

First up: roasted Brussels sprouts with balsamic vinegar.

- Preheat your oven to 450 degrees.
- Take a pound of Brussels sprouts, and cut them into quarters.
- Toss them with enough olive oil to coat them, but not drench them.
- Sprinkle generously with salt and pepper.
- Spread them out over a baking sheet.
- Put them into the hot oven for around 10 minutes. After ten minutes, take them out and turn them. If they look really well done and brown on the edges, they’re done; if not, put them back in for up to another 10 minutes.
- Take them out, and toss them with a teaspoon or two of balsamic vinegar.

That’s it – and they’re amazing.

This one I’m particularly proud of. I absolutely love sweet potatoes. But normally, my wife won’t touch them – she thinks they’re gross. With this recipe, she actually voluntarily had multiple helpings! It’s sweet, salty, and spicy all at the same time, in a wonderful balance.

- Take a couple of pounds of sweet potatoes, peel them, and cut them into cubes about 2 inches on a side.
- Toss them with olive oil to coat.
- Mix together 1 teaspoon of salt, 1/4 cup of brown sugar, and one generous teaspoon of kochukaru (Korean chili powder).
- Sprinkle it over the oiled sweet potatoes, and toss them so they’re all coated.
- Spread onto a baking sheet, and cook at 350 for about 30 minutes, turning them at least once. They’re done when the outside is nicely browned, and they’ve gotten soft.

Finally, roasted cauliflower with onions.

- Preheat your oven to 450.
- Take a whole head of cauliflower, and break it into small florets. Put them into a bowl.
- Take a half of an onion, and slice it thin. Toss the onions with the cauliflower.
- Coat the cauliflower and onions with olive oil – don’t drench them, but make sure that they’ve got a nice coat.
- Sprinkle generously with salt and pepper.
- Spread onto a baking sheet, and into the oven.
- After 10 minutes, take them out, and turn them, then back in for another 10 minutes.

All three of these got eaten not just by adults, but by kids. The Brussels sprouts and sweet potatoes were eaten not just by my kids, but by other people’s kids too, so it’s not just the crazy Chu-Carrolls who thought they were delicious!

I am on quasi-vacation this week (just keeping up with email); hence no posts. But today I crashed a meeting in Napa Valley hosted by Wechsler (KIPAC), Conroy (UCSC), and others. I saw just a few talks, but they were excellent: Jeremiah Murphy (UFl) on supernova explosions, Conroy on abundance anomalies in globular clusters, Blanton (NYU) on photometry, Finkbeiner (CfA) on photometric calibration, and Sarah Tuttle (UT) on the *HETDEX* spectrograph hardware. Great stuff.

Murphy showed us that there are crazy neutrino dynamics in the first fraction of a second of a supernova explosion; in particular, there should be stellar oscillations imprinted on the neutrino signal! Conroy showed that there are light-element *vs* heavy-element abundance anti-correlations in essentially all globular clusters, and indications that some stars are very over-rich in helium. There is no good explanation. Blanton went carefully through the properties of astronomical imaging and photometry, for *two hours*. I loved it, and at the end, Kollmeier (OCIW) said she wanted more! Finkbeiner showed that *PanSTARRS* and *SDSS* have great, precise, consistent photometry, and the calibration is all, entirely, self-calibration. This strongly justifies things I said at AAS this year. Tuttle talked about trade-offs in hardware design. The mass production of spectrographs for *HETDEX* is a huge engineering challenge.

By the way, sometime yesterday this blog received its millionth visit.


For something I’m writing I looked up a newspaper article I was interviewed in, from June 7, 1989. Here’s what I had to say:

Ellenberg on mathematics: “I always think of it — this is kind of crazy — as a zoo. There are a million different mathematical objects. They are like animals. Some are like each other and some are unalike, and they are all objects . . . . There are things in different guises. The amazing thing is, it all connects. Anything you prove with trig[onometry] is just as true if you do it with algebra . . . . I think it is kind of amazing actually, if you think of it from an emotional point of view.”

On learning math: “My feeling is that a lot of people expect not to be good at math. If you see calculus and trig, to a seventh-grader, they see it as something very difficult and very arcane, when maybe the trick is to relax a little bit . . . . Many things you can understand on two levels. If you look at a novel, a novel can be very hard to interpret, but you can still read it and see what happened. With math, there is no real surface level. It is already written in a sort of obscure language. You don’t have the comforting template. You only have the deep structure, and that can be very off-putting.”

On the practicality of math: “Why is it important to have read any Shakespeare for your everyday life? To tell the truth, I can get through the day without ever using a Shakespeare quote, but I think Shakespeare is useful, and I think math is useful.”

What a strange experience, looking at this. In a way I seem very mentally disorganized. But at the same time this is recognizably me. Unsettling.


Not even going to link to this article but this is so magnificently dumb I had to share it with someone.

As everyone knows by now, GM’s entry into the electric car market–the Chevy Volt–costs $41,000 before tax breaks. After the tax breaks, you can happily drive one off the lot for $33,000 … if you can ignore those guilt pangs knowing your fellow Americans have chipped in $8,000 to your new ride.

*Guest post by Christina Vasilakopoulou*

In the eighth installment of the Kan Extension Seminar, we discuss the paper “Elementary Observations on 2-Categorical Limits” by G.M. Kelly, published in 1989. Even though Kelly’s classic book Basic Concepts of Enriched Category Theory, which contains the abstract theory related to indexed (or weighted) limits for arbitrary $\mathcal{V}$-categories, had been available since 1982, the existence of the present article is well justified.

On the one hand, it constitutes an independent account of the fundamental case $\mathcal{V}$=$\mathbf{Cat}$, and thus motivates and exemplifies the more general framework through a gentler, yet meaningful exposition of 2-categorical limits. The explicit construction of specific notable finite limits such as inserters, equifiers etc. promotes the comprehension of the definitions, via a hands-on description. Moreover, these finite limits, and particular results concerning 2-categories rather than general enriched categories, such as the construction of the cotensor as a PIE limit, are central to the theory of 2-categories. Lastly, by introducing indexed lax and pseudo limits along with Street’s bilimits, and providing appropriate lax/pseudo/bicategorical completeness results, the paper also serves as an indispensable reference for the later “2-Dimensional Monad Theory” by Blackwell, Kelly and Power.

I would like to take this opportunity to thank Emily as well as all the other participants of the Kan Extension Seminar. This has been a unique experience of constant motivation and inspiration for me!

Presently, our base of enrichment is the cartesian monoidal closed category $\mathbf{Cat}$ of (small) categories, with the usual adjunction $-\times\mathcal{A}\dashv[\mathcal{A},-]$. The very definition of an indexed limit requires a good command of the basic $\mathbf{Cat}$-categorical notions, as seen for example in “Review of the Elements of 2-categories” by Kelly and Street. In particular, a 2-natural transformation $\alpha:G\Rightarrow H$ between 2-functors consists of components which not only satisfy the usual naturality condition, but also the 2-naturality one expressing compatibility with 2-cells. Moreover, a modification between 2-natural transformations $m:\alpha\Rrightarrow\beta$ has as components families of 2-cells $m_A:\alpha_A\Rightarrow\beta_A:GA\to HA$ compatible with the mapped 1-cells of the domain 2-category, i.e. $m_B\cdot Gf=Hf\cdot m_A$ (where $\cdot$ is whiskering).

A 2-functor $F:\mathcal{K}\to\mathbf{Cat}$ is called representable, when there exists a 2-natural isomorphism $\alpha:\mathcal{K}(K,-)\xrightarrow{\quad\sim\quad}F.$ The components of this isomorphism are $\alpha_A:\mathcal{K}(K,A)\cong FA$ in $\mathbf{Cat}$, and the unit of the representation is the corresponding `element’ $\mathbf{1}\to FK$ via Yoneda.

For a general complete symmetric monoidal closed category $\mathcal{V}$, the usual functor category $[\mathcal{A},\mathcal{B}]$ for two $\mathcal{V}$-categories is endowed with the structure of a $\mathcal{V}$-category itself, with hom-objects ends $[\mathcal{A},\mathcal{B}](T,S)=\int_{A\in\mathcal{A}} \mathcal{B}(TA,SA)$ (which exist at least when $\mathcal{A}$ is small). In our context of $\mathcal{V}$=$\mathbf{Cat}$ it is not necessary to employ ends and coends at all, and the hom-category $[\mathcal{K},\mathcal{L}](G,H)$ of the functor 2-category is evidently the category of 2-natural transformations and modifications. However, we note that computations via (co)ends simplify and are essential for constructions and (co)completeness results for enrichment in general monoidal categories.

To briefly motivate the definition of a weighted limit, recall that an ordinary limit of a ($\mathbf{Set}$-) functor $G:\mathcal{P}\to\mathcal{C}$ is characterized by an isomorphism $\mathcal{C}(C,\mathrm{lim}G)\cong[\mathcal{P},\mathcal{C}](\Delta C, G)$ natural in $C$, where $\Delta C:\mathcal{P}\to\mathcal{C}$ is the constant functor on the object $C$. In other words, the limit is the representing object of the presheaf $[\mathcal{P},\mathcal{C}](\Delta -,G):\mathcal{C}^{op}\to\mathbf{Set}.$ Since components of a natural transformation $\Delta C\Rightarrow G$ (i.e. cones) can be viewed as components of a natural transformation $\Delta\mathbf{1}\Rightarrow\mathcal{C}(C,G-):\mathcal{P}\to\mathbf{Set}$, the above defining isomorphism can be written as $\mathcal{C}(C,\mathrm{lim}G)\cong[\mathcal{P},\mathbf{Set}](\Delta\mathbf{1},\mathcal{C}(C,G-)).$ In this form, ordinary limits can easily be seen as particular examples of conical indexed limits for $\mathcal{V}$=$\mathbf{Set}$, and we are able to generalize the concept of a limit by replacing the functor $\Delta\mathbf{1}$ by an arbitrary functor (weight) $\mathcal{P}\to\mathbf{Set}$.

We may thus think of a 2-functor $F:\mathcal{P}\to\mathbf{Cat}$ as a (small) *indexing type* or *weight*, and a 2-functor $G:\mathcal{P}\to\mathcal{K}$ as a *diagram in $\mathcal{K}$ of shape $\mathcal{P}$*:
$\begin{matrix}
& \mathbf{Cat}\quad \\
{}^{weight} \nearrow_{F} & \\
\mathcal{P} & \overset{G}\underset{diagram}{\rightarrow} & \mathcal{K}. \\
\end{matrix}$
The 2-functor $G$ gives rise to a 2-functor
$\int_p [Fp,\mathcal{K}(-,Gp)]=[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\; \mathcal{K}^{op}\longrightarrow\mathbf{Cat}$
which maps a 0-cell $A$ to the category $[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G))$. A representation of this contravariant 2-functor is an object $\{F,G\}\in\mathcal{K}$ along with a 2-natural isomorphism
$\mathcal{K}(-,\{F,G\})\xrightarrow{\;\sim\;}[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G))$
with components isomorphisms between categories
$\mathcal{K}(A,\{F,G\})\cong[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G-)).$
The unit of this representation is
$\mathbf{1}\to[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(\{F,G\},G))$ which corresponds uniquely to a 2-natural transformation
$\xi:F\Rightarrow\mathcal{K}(\{F,G\},G)$.

Via this 2-natural isomorphism, the object $\{F,G\}$ in $\mathcal{K}$ satisfies a universal property which can be expressed in two levels:

The 1-dimensional aspect of the universal property states that every 2-natural transformation $\rho:F\Rightarrow\mathcal{K}(A,G)$ factorizes as $\begin{matrix} F \xrightarrow{\rho} & \mathcal{K}(A,G) \\ {}_\xi \searrow & \uparrow_{\mathcal{K}(h,1)} \\ & \mathcal{K}(\{F,G\},G) \\ \end{matrix}$ for a unique 1-cell $h:A\to\{F,G\}$ in $\mathcal{K}$, where the vertical arrow is just pre-composition with $h$.

The 2-dimensional aspect of the universal property states that every modification $\theta:\rho\Rrightarrow\rho'$ factorizes as $\mathcal{K}(\alpha,1)\cdot \xi$ for a unique 2-cell $\alpha:h\Rightarrow h'$ in $\mathcal{K}$.

The fact that the 2-dimensional aspect (which asserts an isomorphism of categories) does not in general follow from the 1-dimensional aspect (which asserts a bijection between the hom-sets of the underlying categories) is a recurrent issue of the paper. In fact, things would be different if the *underlying category functor*
$\mathcal{V}(I,-)=(\;)_0:\mathcal{V}\text{-}\mathbf{Cat}\to\mathbf{Cat}$
were conservative, in which case the 1-dimensional universal property would always imply the 2-dimensional one. Certainly though, this is not the case for $\mathcal{V}$=$\mathbf{Cat}$: the respective functor discards all the 2-cells and is not even faithful. However, if we know that a weighted limit exists, then the first level of the universal property suffices to detect it up to isomorphism.

A 2-category $\mathcal{K}$ is *complete* when all limits $\{F,G\}$ exist. The defining 2-natural isomorphism extends the mapping $(F,G)\mapsto\{F,G\}$ into a functor of two variables (the *weighted limit functor*)
$\{-,-\}:[\mathcal{P},\mathbf{Cat}]^{op}\times[\mathcal{P},\mathcal{K}]\longrightarrow \mathcal{K}$
as the left parametrized adjoint (actually its opposite) of the functor $\mathcal{K}(-,?):\mathcal{K}^{op}\times[\mathcal{P},\mathcal{K}]\to[\mathcal{P},\mathbf{Cat}]$
mapping an object $A$ and a functor $G$ to $\mathcal{K}(A,G-)$.
A colimit in $\mathcal{K}$ is a limit in $\mathcal{K}^{op}$, and the *weighted colimit functor* is
$-\ast-:[\mathcal{P}^{op},\mathbf{Cat}]\times[\mathcal{P},\mathcal{K}]\longrightarrow\mathcal{K}.$
Apart from the evident duality, we observe that colimits are often harder to compute than limits. This may partially be due to the fact that $\{F,G\}$ is determined by the representable $\mathcal{K}(-,\{F,G\})$, which gives generalized elements of $\{F,G\}$, whereas the description of
$\mathcal{K}(F\ast G,-)$ gives us arrows out of $F\ast G$. For example, limits in $\mathbf{Cat}$ are easy to compute via
$[\mathcal{A},\{F,G\}]\cong[\mathcal{P},\mathbf{Cat}](F,[\mathcal{A},G-])\cong[\mathcal{A},[\mathcal{P},\mathbf{Cat}](F,G)]$
and in particular, taking $\mathcal{A}=\mathbf{1}$ gives us the objects of the category $\{F,G\}$ and $\mathcal{A}=\mathbf{2}$ gives us the morphisms. On the contrary, colimits in $\mathbf{Cat}$ are not straightforward (except for the property $F\ast G\cong G\ast F$).

Notice that just as ordinary limits are defined, via representability, in terms of limits in $\mathbf{Set}$, we can define weighted limits in terms of limits of representables in $\mathbf{Cat}$: $\mathcal{K}(A,\{F,G\})\cong\{F,\mathcal{K}(A,G-)\},\quad\mathcal{K}(F\ast G,A)\cong\{F,\mathcal{K}(G-,A)\}.$ On the other hand, if the weights are representables, via the Yoneda lemma we get $\{\mathcal{P}(P,-),G\}\cong GP, \qquad \mathcal{P}(-,P)\ast G\cong GP.$
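For the representable-weight case, the isomorphism $\{\mathcal{P}(P,-),G\}\cong GP$ is essentially a one-line consequence of the ($\mathbf{Cat}$-enriched) Yoneda lemma; a sketch:

```latex
\mathcal{K}\big(A,\{\mathcal{P}(P,-),G\}\big)
  \;\cong\; [\mathcal{P},\mathbf{Cat}]\big(\mathcal{P}(P,-),\,\mathcal{K}(A,G-)\big)
  \;\cong\; \mathcal{K}(A,GP),
```

where the first isomorphism is the defining property of the weighted limit and the second is the Yoneda lemma applied to the 2-functor $\mathcal{K}(A,G-):\mathcal{P}\to\mathbf{Cat}$; since the composite is 2-natural in $A$, Yoneda once more gives $\{\mathcal{P}(P,-),G\}\cong GP$.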

The main result for general $\mathcal{V}$-completeness in Kelly’s book says that a $\mathcal{V}$-enriched category is complete if and only if it admits all conical limits (equivalently, products and equalizers) and cotensor products. Explicitly, conical limits are those with weight the constant $\mathcal{V}$-functor $\Delta I$, whereas cotensors are those where the domain enriched category $\mathcal{P}$ is the unit category $\mathbf{1}$, hence the weight and the diagram are determined by objects in $\mathcal{V}$ and $\mathcal{K}$ respectively. Once again, for $\mathcal{V}$=$\mathbf{Cat}$ an elementary description of both limits is possible.

Notice that when a 2-category admits tensor products of the form $\mathbf{2}\ast A$, the 2-dimensional universal property follows from the 1-dimensional one for every limit, because of the conservativity of the functor $\mathbf{Cat}_0(\mathbf{2},-)$ and the definition of tensors. Moreover, the former also implies that the category $\mathbf{2}$ is a strong generator in $\mathbf{Cat}$, hence the existence of just the cotensor $\{\mathbf{2},B\}$ along with conical limits in a 2-category $\mathcal{K}$ is enough to deduce 2-completeness.

$\mathbf{Cat}$ itself has cotensor and tensor products, given by $\{\mathcal{A},\mathcal{B}\}=[\mathcal{A},\mathcal{B}]$ and $\mathcal{A}\ast\mathcal{B}=\mathcal{A}\times\mathcal{B}$. It is also cocomplete, all colimits being constructed from tensors and ordinary colimits in $\mathbf{Cat}_0$ (which give the conical colimits in $\mathbf{Cat}$ by the existence of the cotensor $[\mathbf{2},B]$).

If we were to make use of ends and coends, the explicit construction of an arbitrary 2-(co)limit in $\mathcal{K}$ as the (co)equalizer of a pair of arrows between (co)products of (co)tensors coincides with $\{F,G\}=\int_K \{FK,GK\}, \qquad F\ast G=\int^K FK\ast GK.$ Such an approach simplifies the proofs of many useful properties of limits and colimits, such as $\{F,\{G,H\}\}\cong\{F\ast G,H\},\;\;(G\ast F)\ast H\cong F\ast(G\ast H)$ for appropriate 2-functors.

The paper provides the description of some important classes of limits in 2-categories, essentially by exhibiting the unit of the defining representation for each particular case. The following table summarizes the main examples included:

Let’s briefly go through the explicit construction of an inserter in a 2-category $\mathcal{K}$. The weight and diagram shape are as in the first line of the above table: the indexing 2-category $\mathcal{P}$ is $\bullet\rightrightarrows\star$, the weight $F$ sends $\bullet$ to $\mathbf{1}$ and $\star$ to $\mathbf{2}$, and we denote by $B\overset{f}\underset{g}{\rightrightarrows}C$ the image of the diagram in $\mathcal{K}$. The standard technique is to identify the form of the objects and morphisms of the hom-category $[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G-))$, and then state both aspects of the universal property.

An object is a 2-natural transformation $\alpha:F\Rightarrow\mathcal{K}(A,G-)$ with components $\alpha_\bullet:\mathbf{1}\to\mathcal{K}(A,B)$ and $\alpha_\star:\mathbf{2}\to\mathcal{K}(A,C)$ satisfying the usual naturality condition (2-naturality follows trivially, since $\mathcal{P}$ has only identity 2-cells). This amounts to the following data:

a 1-cell $A\xrightarrow{\alpha_\bullet}B$, i.e. the object in $\mathcal{K}(A,B)$ determined by the functor $\alpha_\bullet$;

a 2-cell ${\alpha_\star0}\overset{\alpha_\star}{\Rightarrow}{\alpha_\star1}$, i.e. the morphism in $\mathcal{K}(A,C)$ determined by the functor $\alpha_\star$;

properties, which make the 1-cells $\alpha_\star0,\alpha_\star1$ factorize as $\alpha_\star0=A\xrightarrow{\alpha_\bullet}B\xrightarrow{f}C$ and $\alpha_\star1=A\xrightarrow{\alpha_\bullet}B\xrightarrow{g}C$.

We can encode the above data by a diagram $\begin{matrix} & B & \\ {}^{\alpha_\bullet} \nearrow && {\searrow}^f \\ A\; & \Downarrow{\alpha_\star}& \quad C. \\ {}_{\alpha_\bullet} \searrow && \nearrow_g \\ & B & \\ \end{matrix}$ Now a morphism is a modification $m:\alpha\Rrightarrow\beta$ between two objects as above. This has components

$m_\bullet:\alpha_\bullet\Rightarrow\beta_\bullet$ in $\mathcal{K}(A,B)$;

$m_\star:\alpha_\star\Rightarrow\beta_\star$ given by 2-cells $m_\star^0:\alpha_\star0\Rightarrow{\beta_\star0}$ and $m_\star^1:\alpha_\star1\Rightarrow\beta_\star1$ in $\mathcal{K}(A,C)$ satisfying naturality $m^1_\star\circ\alpha_\star=\beta_\star\circ m^0_\star$.

The modification condition forces $m^0_\star=f\cdot m_\bullet$ and $m^1_\star=g\cdot m_\bullet$, i.e. it gives the components of $m_\star$ as whiskered composites of $m_\bullet$. We can thus express such a modification as a 2-cell $m_\bullet$ satisfying $gm_\bullet\circ\alpha_\star=\beta_\star\circ fm_\bullet$ (graphically expressed by pasting $m_\bullet$ accordingly to the sides of $\alpha_\star,\beta_\star$).

This encoding simplifies the statement of the universal property for $\{F,G\}$, as the object in $\mathcal{K}$ through which any such natural transformation and modification factorize uniquely in an appropriate way (in fact, through the unit $\xi$). A very similar process can be followed for the identification of the other classes of limits. As an illustration, let’s consider some of these limits in the 2-category $\mathbf{Cat}$.

The inserter of two functors $F,G:\mathcal{B}\to\mathcal{C}$ is a category $\mathcal{A}$ with objects pairs $(B,h)$ where $B\in\mathcal{B}$ and $h:FB\to GB$ in $\mathcal{C}$. A morphism $(B,h)\to(B',h')$ is an arrow $f:B\to B'$ in $\mathcal{B}$ such that the following diagram commutes: $\begin{matrix} FB & \overset{Ff}{\longrightarrow} & FB' \\ {}_h\downarrow && \downarrow_{h'} \\ GB & \underset{Gf}{\longrightarrow} & GB'. \\ \end{matrix}$ The functor $\alpha_\bullet=P:\mathcal{A}\to\mathcal{B}$ is just the forgetful functor, and the natural transformation is given by $(\alpha_\star)_{(B,h)}=h$.
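The inserter is especially easy to compute when $\mathcal{B}$ and $\mathcal{C}$ are posets viewed as categories: a 1-cell $h:FB\to GB$ exists (uniquely) iff $FB\le GB$, and the commuting-square condition is automatic since $\mathcal{C}$ has at most one arrow between any two objects. A minimal Python sketch of this special case (the finite-poset encoding, function names, and the sample maps are my own, not from the paper):

```python
# Inserter of two monotone maps F, G : B -> C between finite posets,
# viewing each poset as a category with at most one arrow per ordered pair.
# An inserter object (B, h : FB -> GB) exists exactly when FB <= GB in C;
# a morphism (B, h) -> (B', h') is any relation B <= B' in B.

def inserter(B_objs, B_leq, C_leq, F, G):
    """Return (objects, hom-pairs) of the inserter of F and G."""
    f = F.__getitem__ if isinstance(F, dict) else F
    g = G.__getitem__ if isinstance(G, dict) else G
    objs = [b for b in B_objs if C_leq(f(b), g(b))]
    homs = [(b, b2) for b in objs for b2 in objs if B_leq(b, b2)]
    return objs, homs

def divides(a, b):
    return b % a == 0

# Example: B = C = the divisibility poset on {1, 2, 3, 6};
# F = identity, G = gcd(-, 2) (both monotone for divisibility).
F = {1: 1, 2: 2, 3: 3, 6: 6}
G = {1: 1, 2: 2, 3: 1, 6: 2}
objs, homs = inserter([1, 2, 3, 6], divides, divides, F, G)
# objs keeps exactly the b with F(b) | G(b), namely 1 and 2.
```

This is only the poset shadow of the general construction, of course: in a general $\mathcal{C}$ there may be many choices of $h$ for the same $B$, and those choices are part of the data.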

The comma-object of two functors $F,G$ with common codomain is precisely the comma category. If the functors also have the same domain, their inserter is a subcategory of the comma category.

The equifier of two natural transformations $\phi^1,\phi^2:F\Rightarrow G:\mathcal{B}\to\mathcal{C}$ is the full subcategory $\mathcal{A}$ of $\mathcal{B}$ spanned by all objects $B$ such that $\phi^1_B=\phi^2_B$ in $\mathcal{C}$.

There is a variety of constructions of new classes of limits from given ones, coming down to the construction of endo-identifiers, inverters, iso-inserters, comma-objects, iso-comma-objects, lax/oplax/pseudo limits of arrows and the cotensors $\{\mathbf{2},K\}$, $\{\mathbf{I},K\}$ out of inserters, equifiers and binary products in the 2-category $\mathcal{K}$. Along with the substantial construction of arbitrary cotensors out of these three classes, P(roducts)I(nserters)E(quifiers) limits are established as essential tools, in particular in relation to categories of algebras for 2-monads. Notice that equalizers are `too tight’ to fit in certain 2-categories of importance such as $\mathbf{Lex}$.

The concept of a weighted 2-limit strongly depends on the specific structure of the 2-category $[\mathcal{P},\mathbf{Cat}]$ of 2-functors, 2-natural transformations and modifications, for the 2-categories $\mathcal{P}$ and $\mathbf{Cat}$. If we alter this structure by considering lax natural transformations or pseudonatural transformations, we obtain the definition of the *lax limit* $\{F,G\}_l$ and *pseudo limit* $\{F,G\}_p$ as the representing objects for the 2-functors
$\begin{matrix}
Lax[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\mathcal{K}^{op}\to\mathbf{Cat} \\
Psd[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\mathcal{K}^{op}\to\mathbf{Cat}.
\end{matrix}$
Notice that the functor categories $Lax[\mathcal{P},\mathcal{L}]$ and $Psd[\mathcal{P},\mathcal{L}]$ are 2-categories whenever $\mathcal{L}$ is a 2-category, hence the defining isomorphisms are again between categories as before.

An important remark is that any lax or pseudo limit in $\mathcal{K}$ can in fact be expressed as a `strict’ weighted 2-limit. This is done by replacing the original weight with its image under the left adjoint of the inclusion functors $[\mathcal{P},\mathbf{Cat}]\hookrightarrow Lax[\mathcal{P},\mathbf{Cat}]$, $[\mathcal{P},\mathbf{Cat}]\hookrightarrow Psd[\mathcal{P},\mathbf{Cat}]$. The converse does not hold: for example, inserters and equifiers are neither lax nor pseudo limits.

We can relax the notion of limits in 2-categories even further, and define the *bilimit* $\{F,G\}_b$ of 2-functors $F$ and $G$ as the representing object *up to equivalence*:
$\mathcal{K}(A,\{F,G\}_b)\simeq Psd[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(A,G)).$
This is of course a particular case of general bilimits in bicategories, for which $\mathcal{P}$ and $\mathcal{K}$ are required to be bicategories and $F$ and $G$ homomorphisms of bicategories. The above equivalence of categories expresses a birepresentation of the homomorphism $Hom[\mathcal{P},\mathbf{Cat}](F,\mathcal{K}(-,G)):\mathcal{K}^{op}\to\mathbf{Cat}$.

Evidently, bilimits (first introduced by Ross Street) may exist even when pseudo limits do not, since they require an equivalence rather than an isomorphism of hom-categories. The following two results sum up the conditions ensuring that a 2-category has all lax, pseudo and bilimits.

A 2-category with products, inserters and equifiers has all lax and pseudo limits (whereas it may not have all strict 2-limits).

A 2-category with biproducts, biequalizers and bicotensors is *bicategorically complete*. Equivalently, it admits all bilimits if and only if for all 2-functors $F:\mathcal{P}\to\mathbf{Cat}$, $G:\mathcal{P}\to\mathcal{K}$ from a small ordinary category $\mathcal{P}$, the above mentioned birepresentation exists.

Street’s construction of an arbitrary bilimit requires a descent object of a 3-truncated bicosimplicial object in $\mathcal{K}$. An appropriate modification of the arguments exhibits lax and pseudo limits as PIE limits.

These weaker forms of limits in 2-categories are fundamental for the theory of 2-categories and bicategories. Many important constructions such as the Eilenberg-Moore object as well as the Grothendieck construction on a fibration, arise as lax/oplax limits. They are also crucial in 2-monad theory, for example when studying categories of (strict) algebras with non-strict (pseudo or even lax/oplax) morphisms, which are more common in nature.


Let $P \in \mathbb{R}[x,y,z]$ be an irreducible polynomial in three variables. As $\mathbb{R}$ is not algebraically closed, the zero set $Z_{\mathbb{R}}(P) = \{x\in\mathbb{R}^3: P(x)=0\}$ can split into various components of dimension between $0$ and $2$. For instance, if $P(x,y,z) = x^2+y^2$, the zero set is a line; more interestingly, if $P(x,y,z) = y^2 - x^2(x-1)$, then $Z_{\mathbb{R}}(P)$ is the union of a line and a surface (equivalently, the product of an acnodal cubic curve with a line). We will assume that the $2$-dimensional component $Z_{\mathbb{R},2}(P)$ is non-empty, thus defining a real surface in $\mathbb{R}^3$. In particular, this hypothesis implies that $P$ is not just irreducible over $\mathbb{R}$, but is in fact absolutely irreducible (i.e. irreducible over $\mathbb{C}$), since otherwise one could use the complex factorisation of $P$ to contain $Z_{\mathbb{R}}(P)$ inside the intersection of the complex zero locus of a complex polynomial $P_1$ and its complex conjugate $\overline{P_1}$, with $P_1, \overline{P_1}$ having no common factor, forcing $Z_{\mathbb{R}}(P)$ to be at most one-dimensional. (For instance, in the case $P(x,y,z) = x^2+y^2$, one can take $P_1 = x+iy$.) Among other things, this makes $Z_{\mathbb{R},2}(P)$ a Zariski-dense subset of the complex zero locus $Z_{\mathbb{C}}(P)$, thus any polynomial identity which holds true at every point of $Z_{\mathbb{R},2}(P)$ also holds true on all of $Z_{\mathbb{C}}(P)$. This allows us to easily use tools from algebraic geometry in this real setting, even though the reals are not quite algebraically closed.

The surface $Z_{\mathbb{R},2}(P)$ is said to be ruled if, for a Zariski open dense set of points $x \in Z_{\mathbb{R},2}(P)$, there exists a line $\ell_x = \{x+tv: t\in\mathbb{R}\}$ through $x$ for some non-zero $v\in\mathbb{R}^3$ which is completely contained in $Z_{\mathbb{R},2}(P)$, thus

$P(x+tv)=0$

for all $t\in\mathbb{R}$. Also, a point $x \in Z_{\mathbb{R},2}(P)$ is said to be a flecnode if there exists a line $\ell_x$ through $x$ for some non-zero $v\in\mathbb{R}^3$ which is tangent to $Z_{\mathbb{R},2}(P)$ to third order, in the sense that

$D_v^j P(x) = 0 \quad \text{for } j=0,1,2,3, \qquad (1)$

where $D_v := v\cdot\nabla$ denotes the directional derivative along $v$.
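These definitions are easy to sanity-check numerically. A small Python sketch using the saddle $P(x,y,z)=z-xy$, an illustrative example of my own choosing (it is doubly ruled, so in particular every point is a flecnode):

```python
# The saddle P(x, y, z) = z - x*y is ruled: through each surface point
# (a, b, a*b), the line t -> (a, b, a*b) + t*(1, 0, b) lies entirely in
# {P = 0}.  (Example surface chosen for illustration; not from the post.)

def P(x, y, z):
    return z - x * y

def on_line(a, b, t):
    # Point at parameter t on the ruling line through (a, b, a*b)
    # with direction v = (1, 0, b).
    return (a + t, b, a * b + t * b)

# Since P vanishes identically along the line, the line is tangent to all
# orders; in particular D_v P = D_v^2 P = D_v^3 P = 0, i.e. the flecnode
# condition holds at (a, b, a*b).
vals = [P(*on_line(2.0, 3.0, t)) for t in (-1.0, 0.0, 0.5, 4.0)]
```

Of course the whole content of the theorem runs the other way: the flecnode condition is the third-order shadow of the ruling, and the theorem recovers the ruling from it.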

Theorem 1 (Cayley-Salmon theorem). Let $P\in\mathbb{R}[x,y,z]$ be an irreducible polynomial with $Z_{\mathbb{R},2}(P)$ non-empty. Suppose that a Zariski dense set of points in $Z_{\mathbb{R},2}(P)$ are flecnodes. Then $Z_{\mathbb{R},2}(P)$ is a ruled surface.

Among other things, this theorem was used in the celebrated result of Guth and Katz that almost solved the Erdos distance problem in two dimensions, as discussed in this previous blog post. Vanishing to third order is necessary: observe that in a surface of negative curvature, such as a saddle, every point on the surface is tangent to second order to a line (a line in an asymptotic direction, along which the second fundamental form vanishes).

The original proof of the Cayley-Salmon theorem, dating back to at least 1915, is not easily accessible and not written in modern language. A modern proof of this theorem (together with substantial generalisations, for instance to higher dimensions) is given by Landsberg; the proof uses the machinery of modern algebraic geometry. The purpose of this post is to record an alternate proof of the Cayley-Salmon theorem based on classical differential geometry (in particular, the notion of torsion of a curve) and basic ODE methods (in particular, Gronwall’s inequality and the Picard existence theorem). The idea is to “integrate” the lines indicated by the flecnode to produce smooth curves on the surface ; one then uses the vanishing (1) and some basic calculus to conclude that these curves have zero torsion and are thus planar curves. Some further manipulation using (1) (now just to second order instead of third) then shows that these curves are in fact straight lines, giving the ruling on the surface.

Update: Janos Kollar has informed me that the above theorem was essentially known to Monge in 1809; see his recent arXiv note for more details.

I thank Larry Guth and Micha Sharir for conversations leading to this post.

** — 1. Proof — **

Let $S$ denote the smooth points of $Z_{\mathbb{R},2}(P)$; then $S$ is a smooth surface that is a Zariski open dense subset of $Z_{\mathbb{R},2}(P)$, and hence Zariski dense in $Z_{\mathbb{C}}(P)$. We consider the projective tangent bundle $\mathbb{P}TS$ of $S$; this is a smooth three-dimensional manifold, which is a bundle of copies of the projective line over $S$, with elements $(x,[v])$ consisting of a point $x$ in $S$ and the projective class $[v]$ of a direction $v$ that is tangent to $S$ at $x$ and is non-zero. Since $S$ and the fibres (copies of the projective line) are both irreducible varieties, it is easy to see that $\mathbb{P}TS$ is also an irreducible variety.

Inside $\mathbb{P}TS$, we consider the subset $F$ of points $(x,[v])$ which obey the flecnode condition (1) for $j=0,1,2,3$. By hypothesis, the projection of $F$ to $S$ is Zariski dense. On the other hand, $F$ is clearly an algebraic set. Thus the dimension of $F$ is at least $2$, and there is at least one component of $F$ whose projection to $S$ is two-dimensional (i.e. the projection is dominant). In particular we can find an irreducible algebraic surface $\Sigma$ in $\mathbb{P}TS$ whose projection to $S$ is open dense (not just in the Zariski sense, but also in the differential geometry sense). By removing the singular points of $\Sigma$, we may assume that $\Sigma$ is a smooth surface.

We now claim that the projection map is generically a local diffeomorphism, thus has full rank for a Zariski dense set of points in . This is a simple consequence of Sard’s theorem, but for our purposes it is also instructive to see an ODE proof: if fails to have full rank generically, then it must have rank one generically or rank zero generically. If it has rank one generically, one can use the Picard existence theorem to locally foliate an open dense subset of by curves with the property that for each , the derivative lies in the kernel of , so that if we write , then for all , and so is constant; thus the curves each lie in a single fibre of . This locally describes as a one-dimensional smooth family of curves inside the fibre of , and so the image is locally one-dimensional, contradicting the two-dimensional nature of . A similar argument works when has rank zero generically.
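The rank-one step above integrates a direction field and observes that the resulting curves stay inside a fibre. As a small numerical analogue of this kind of argument (with an entirely hypothetical surface and tangent field, not the ones in the proof), one can check that integral curves of a field tangent to a level set remain in that level set:

```python
import math

def F(p):
    # Hypothetical surface: the unit sphere F(x, y, z) = x^2 + y^2 + z^2 - 1 = 0.
    x, y, z = p
    return x*x + y*y + z*z - 1.0

def direction(p):
    # An illustrative unit-speed field tangent to the sphere:
    # v = (-y, x, 0) / |(-y, x, 0)| is orthogonal to the gradient (2x, 2y, 2z).
    x, y, z = p
    n = math.hypot(x, y)
    return (-y / n, x / n, 0.0)

def rk4_step(p, h):
    # One classical Runge-Kutta step for the ODE p' = direction(p).
    def add(a, b, s):
        return tuple(ai + s * bi for ai, bi in zip(a, b))
    k1 = direction(p)
    k2 = direction(add(p, k1, h / 2))
    k3 = direction(add(p, k2, h / 2))
    k4 = direction(add(p, k3, h))
    return tuple(p[i] + h / 6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i])
                 for i in range(3))

# Integrate from a point on the surface; the curve should stay on F = 0.
p = (0.6, 0.0, 0.8)
for _ in range(1000):
    p = rk4_step(p, 0.01)
print(abs(F(p)))  # stays near zero: the integral curve lies in the surface
```

This is of course only the numerical shadow of the Picard theorem argument; the point is the same, that curves integrated from a tangent field never leave the constraint set.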

Since is a local diffeomorphism generically, we may apply the inverse function theorem to conclude that on an open dense subset of , we can locally invert this map, which in particular gives *smooth* local maps from open subsets of to unit tangent vectors at such that the flecnode condition (1) is satisfied for all such and .

By the Picard existence theorem, we may thus locally foliate by curves with the property that

for all ; thus has unit speed and is always tangent to a flecnode direction. Thus, by (1) we have

for . Expanding this out in coordinates by the chain rule (and using the usual summation conventions), using to denote the components of , and to denote the first partial derivatives of for , to denote the second partial derivatives, and so forth, we have

We can obtain further differential equations by differentiating the above equations in . For instance, if we differentiate (3) in we obtain

and hence by (4)

Similarly, if we differentiate (4) in we obtain

and hence by (5)

Finally, if we differentiate (6) in we obtain

and hence by (7)

The equations (3), (6), (8) have a simple geometric interpretation: the first three derivatives are all orthogonal to the gradient . Generically, this gradient is non-zero, and we are in three dimensions, so we conclude that are always coplanar. Equivalently, the torsion of the curve vanishes, and hence the curve is necessarily planar (locally, at least). Another way to see this is to start with the identity

where is the cross product, and conclude that is a scalar multiple of whenever it is non-vanishing, which by Gronwall’s inequality shows that has fixed orientation whenever it is non-vanishing.
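As a concrete sanity check (purely illustrative, with hand-picked curves rather than anything from the proof), one can compute the torsion from the first three derivatives via the scalar triple product, and verify that it vanishes exactly for a planar curve but not for a helix:

```python
import math

def triple(a, b, c):
    # Scalar triple product a . (b x c): vanishes iff a, b, c are coplanar.
    return (a[0] * (b[1]*c[2] - b[2]*c[1])
          + a[1] * (b[2]*c[0] - b[0]*c[2])
          + a[2] * (b[0]*c[1] - b[1]*c[0]))

def torsion(r1, r2, r3):
    # Torsion of a curve from its first three derivatives:
    # tau = det(r', r'', r''') / |r' x r''|^2.
    cx = (r1[1]*r2[2] - r1[2]*r2[1],
          r1[2]*r2[0] - r1[0]*r2[2],
          r1[0]*r2[1] - r1[1]*r2[0])
    return triple(r1, r2, r3) / sum(c*c for c in cx)

t = 0.7
# Planar circle (cos t, sin t, 0): all three derivatives lie in the plane z = 0.
planar = torsion((-math.sin(t), math.cos(t), 0.0),
                 (-math.cos(t), -math.sin(t), 0.0),
                 (math.sin(t), -math.cos(t), 0.0))
# Helix (cos t, sin t, t): genuinely non-planar, torsion 1/2.
helix = torsion((-math.sin(t), math.cos(t), 1.0),
                (-math.cos(t), -math.sin(t), 0.0),
                (math.sin(t), -math.cos(t), 0.0))
print(planar, helix)  # ~0.0 and ~0.5
```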

So there is a plane in in which locally lies. If vanished on this plane, then , being irreducible, would be just and we would be done, so we may assume that is non-vanishing here, thus is at most one-dimensional. On the other hand, (3), (6) show that are both orthogonal to the gradient of restricted to , which is generically non-zero; as we now only have two dimensions, this implies that are parallel. Thus the curvature of now also vanishes, which implies that is a straight line. Hence we have locally foliated at least a small open neighbourhood in by straight lines, which ensures that is ruled as desired.

Filed under: expository, math.AG, math.DG Tagged: Cayley-Salmon theorem, flecnode, ruled surface

Long-time readers of this blog (are there any left?) know me well, since I often used to write posts about personal matters here and on my previous sites. However, readers come and go, and I realize that lately I have not disclosed much of my personal life here: where I work, what my family is like, what I do in my spare time, and what my dreams and projects for the future are. So it seems a good idea to write down some personal details here.

Some good pre-publication reviews are coming in! From Kirkus:

Witty and expansive, Ellenberg’s math will leave readers informed, intrigued and armed with plenty of impressive conversation starters.

And Booklist (not available online, unfortunately):

Relying on remarkably few technical formulas, Ellenberg writes with humor and verve as he repeatedly demonstrates that mathematics simply extends common sense. He manages to translate even the work of theoretical pioneers such as Cantor and Gödel into the language of intelligent amateurs. The surprises that await readers include not only a discovery of the astonishing versatility of mathematical thinking but also a realization of its very real limits. Mathematics, as it turns out, simply cannot resolve the real-world ambiguities surrounding the Bush-Gore cliff-hanger of 2000, nor can it resolve the much larger question of God’s existence. A bracing encounter with mathematics that matters.

Astonishingly, in the last few weeks, I’ve actually found time to read some– *gasp*– novels. In particular, I finished two books that probably belong in the “Hard SF” genre: A Darkling Sea by James L. Cambias and Lockstep by Karl Schroeder. Both Jim and Karl are people I’ve met many times at cons; I’ve enjoyed a lot of books by Karl, but this is Jim’s first published novel (I think).

I’m lumping these together both because it’s rare for me to get time to read, let alone booklog stuff, and because there’s a sense in which they’re complementary books: both offer thoroughly fascinating far-future settings by exploring neat ideas in one area of science while glossing over some others.

A Darkling Sea is set on Ilmatar, a Europa-like world a long way from Earth, on a human research station dedicated to studying the native life at the bottom of an ocean which is itself under a kilometer or so of ice. The crab-like native creatures have highly developed sonar senses, and survive by farming organisms that strain sulfur-based nutrients out of warmer water venting from the planet’s interior. They have a rich and fascinating culture, operating at a sort of pre-Victorian level, and the Ilmataran side of the plot centers on some proto-scientists. On the human side, the plot follows the scientists in the research station, and the fallout when a grandstanding human reporter is killed in an encounter with Ilmatarans, triggering a response from a third race, the Sholen, who have taken it upon themselves to enforce a sort of Prime Directive for everyone. The Sholen have their own interesting biology and culture, though less developed than the Ilmataran.

Lockstep, on the other hand, takes place in an entirely human-derived future, when Toby McGonigal wakes up from cryogenic hibernation to discover that 14,000 years have elapsed since he headed out on a routine mission to explore a comet. He’s revived in the Lockstep culture, where vast networks of interstellar trade are made possible by the practice of “wintering over”: every colony participating in the 360/1 lockstep will spend thirty years in hibernation for every month that they spend awake; this allows travel between worlds to seem like a mere overnight jaunt: travelers go to sleep, travel at sub-light speed to their destination, and wake up for a month or so at the other end, then return to find the folks they left behind still in synch with them. It’s a nifty idea, and the lockstep culture is worked out in some detail (including its role relative to the non-lockstep cultures of the “fast worlds” of the inner Solar System and other stars).

In both cases, the real attraction is the Big Idea behind the setting: the Ilmataran ecosystem and the lockstep culture. Other details are kind of fuzzy– there’s some sort of FTL travel in A Darkling Sea, but all that’s really mentioned about it is that it’s really expensive; and the perfect cryogenic hibernation technology of the locksteps is pure handwavium. But both of those core ideas are worked through in a thorough and thoughtful way, making them a pleasure to read about.

Both books also feature action-movie plots– the human researchers on Ilmatar launch a campaign of resistance against the Sholen, and Toby turns out to be the key to a bunch of family politics in the lockstep, which soon has him on the run not knowing who to trust. And in keeping with the notion that science fiction is always *really* about the era in which it’s written, both books include a good deal of political subtext that isn’t all that “sub.” Lockstep is probably the more polished of the two, as far as the plot goes, but that’s not really the point of either book. These are both squarely in the Asimov/Clarke/Clement/Niven sort of tradition, where the plot is mostly an excuse to explore a really cool world. And the worlds here are, indeed, really cool.

So, you know, if that’s the kind of thing you like, I’m fairly confident you’ll like these. I don’t always go for that sort of thing myself, but I enjoyed both of these.

In December, a result from the Large Underground Xenon (LUX) experiment was featured in *Nature’s Year In Review* as one of the most important scientific results of 2013. As a student who has spent the past four years working on this experiment, I will do my best to provide an introduction to it and hopefully answer the question: why all the hype over what turned out to be a null result?

**Direct Dark Matter Detection**

Weakly Interacting Massive Particles (WIMPs), or particles that interact only through the weak nuclear force and gravity, are a particularly compelling solution to the dark matter problem because they arise naturally in many extensions to the Standard Model. Quantum Diaries did a wonderful series last summer on dark matter, located here, so I won’t get into too many details about dark matter or the WIMP “miracle”, but I would however like to spend a bit of time talking about direct dark matter detection.

The Earth experiences a dark matter “wind”, or flux of dark matter passing through it, due to our motion through the dark matter halo of our galaxy. Using standard models for the density and velocity distribution of the dark matter halo, we can calculate that there are nearly 1 billion WIMPs per square meter per second passing through the Earth. In order to match observed relic abundances in the universe, we expect these WIMPs to have a small yet measurable interaction cross-section with ordinary nuclei.
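The "nearly 1 billion per square meter per second" figure is easy to reproduce with a back-of-envelope calculation. The round numbers below are my own assumptions for the sketch (a local halo density of ~0.3 GeV/cm³, a 100 GeV WIMP, and a mean relative speed of ~230 km/s), not values quoted in this post:

```python
# Back-of-envelope WIMP flux through the Earth.
rho = 0.3      # GeV per cm^3, assumed local dark matter density
mass = 100.0   # GeV, assumed WIMP mass
v = 230e5      # cm/s, assumed mean speed (230 km/s)

number_density = rho / mass      # WIMPs per cm^3
flux_cm2 = number_density * v    # WIMPs per cm^2 per second
flux_m2 = flux_cm2 * 1e4         # per m^2 per second
print(f"{flux_m2:.1e}")  # roughly 7e8, i.e. close to a billion per m^2 per s
```

Note that the answer scales inversely with the assumed WIMP mass: a 10 GeV WIMP would give ten times the number flux.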

In other words, there must be a small-but-finite probability of an incoming WIMP scattering off a target in a laboratory in such a way that we can detect it. The goal of direct detection experiments is therefore to look for these scattering events. These events are characterized by recoil energies of a few to tens of keV, which is quite small, but it is large enough to produce an observable signal.

So here’s the challenge: How do you build an experiment that can measure an extremely small, extremely rare signal with very high precision amid large amounts of background?

**Why Xenon?**

The signal from a recoil event inside a direct detection target typically takes one of three forms: scintillation light, ionization of an atom inside the target, or heat energy (phonons). Most direct detection experiments focus on one (or two) of these channels.

Xenon is a natural choice for a direct detection medium because it is easy to read out signals from two of these channels. Energy deposited in the scintillation channel is easily detectable because xenon is transparent to its own characteristic 175-nm scintillation. Energy deposited in the ionization channel is likewise easily detectable, since ionization electrons under the influence of an applied electric field can drift through xenon for distances up to several meters. These electrons can then be read out by any one of a couple different charge readout schemes.

Furthermore, the ratio of the energy deposited in these two channels is a powerful tool for discriminating between nuclear recoils, such as those induced by WIMPs and neutrons, which are our signal of interest, and electronic recoils, such as those induced by gamma rays, which are a major source of background.

Xenon is also particularly good for low-background science because of its self-shielding properties. That is, because liquid xenon is so dense, gammas and neutrons tend to attenuate within just a few cm of entering the target. Any particle that does happen to be energetic enough to reach the center of the target has a high probability of undergoing multiple scatters, which are easy to pick out and reject in software. This makes xenon ideal not just for dark matter searches, but also for other rare event searches such as neutrinoless double-beta decay.

**The LUX Detector**

The LUX experiment is located nearly a mile underground at the Sanford Underground Research Facility (SURF) in Lead, South Dakota. LUX rests on the 4850-foot level of the old Homestake gold mine, which was turned into a dedicated science facility in 2006.

Besides being a mining town and a center of Old West culture (The neighboring town, Deadwood, is famed as the location where Wild Bill Hickok met his demise in a poker game), Lead has a long legacy of physics. The same cavern where LUX resides once held Ray Davis’s famous solar neutrino experiment, which provided some of the first evidence for neutrino flavor oscillations and later won him the Nobel Prize.

The detector itself is what is called a two-phase time projection chamber (TPC). It essentially consists of a 370-kg xenon target in a large titanium can. This xenon is cooled down to its condensation point (~165 K), so that the bulk of the xenon target is liquid, and there is a thin layer of gaseous xenon on top. LUX has 122 photomultiplier tubes (PMTs) in two different arrays, one array on the bottom looking up into the main volume of the detector, and one array on the top looking down. Just inside those arrays are a set of parallel wire grids that supply an electric field throughout the detector. A gate grid, located between the cathode and anode grids and lying close to the liquid surface, allows the electric fields in the liquid and gas regions to be tuned separately.

When an incident particle interacts with a xenon atom inside the target, it excites or ionizes the atom. In a mechanism common to all noble elements, that atom will briefly bond with another nearby xenon atom. The subsequent decay of this “dimer” back into its two constituent atoms causes a photon to be emitted in the UV. In LUX, this flash of scintillation light, called primary scintillation light or S1, is immediately detected by the PMTs. Next, any ionization charge that is produced is drifted upwards by a strong electric field (~200 V/cm) before it can recombine. This charge cloud, once it reaches the liquid surface, is pulled into the gas phase and accelerated very rapidly by an even stronger electric field (several kV/cm), causing a secondary flash of scintillation called S2, which is also detected by the PMTs. A typical signal read out from an event in LUX therefore consists of a PMT trace with two tell-tale pulses.
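One practical payoff of the two-pulse structure is position reconstruction: the charge cloud drifts upward at a constant speed, so the S1-S2 delay gives the depth of the event. A minimal sketch, where the drift speed (~1.5 mm/µs, typical of liquid xenon at fields of this order) is my assumption and not a number from this post:

```python
# Illustrative event-depth reconstruction from the S1-S2 time separation.
drift_speed_mm_per_us = 1.5  # assumed drift speed in liquid xenon

def depth_mm(s1_time_us, s2_time_us):
    # The ionization cloud drifts upward at constant speed, so the delay
    # between the S1 and S2 pulses is proportional to the event depth.
    return (s2_time_us - s1_time_us) * drift_speed_mm_per_us

print(depth_mm(0.0, 100.0))  # a 100 us delay -> event ~150 mm below the surface
```

Combined with the S2 light pattern on the top PMT array (which gives the horizontal position), this is what makes the fiducial-volume cuts described below possible.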

As in any rare event search, controlling the backgrounds is of utmost importance. LUX employs a number of techniques to do so. By situating the detector nearly a mile underground, we reduce cosmic muon flux by a factor of 10^{7}. Next, LUX is deployed into a 300-tonne water tank, which reduces gamma backgrounds by another factor of 10^{7} and neutrons by a factor of between 10^{3} and 10^{9}, depending on their energy. Third, by carefully choosing a fiducial volume in the center of the detector, i.e., by cutting out events that happen near the edge of the target, we can reduce background by another factor of 10^{4}. And finally, electronic recoils produce much more ionization than do the nuclear recoils that we are interested in, so by looking at the ratio S2/S1 we can achieve over 99% discrimination between gammas and potential WIMPs. All this taken into account, the estimated background for LUX is less than 1 WIMP-like event throughout 300 days of running, making it essentially a zero-background experiment. The center of LUX is in fact the quietest place in the world, radioactively speaking.
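Multiplying the quoted rejection factors shows why the background budget works out. This is illustrative arithmetic only: as noted above, the individual factors apply to different background species, so the product below is just the chain a gamma ray would have to survive:

```python
# Chaining the quoted background-rejection factors (illustrative only).
muon_reduction = 1e7        # cosmic muon flux, from going ~a mile underground
gamma_water = 1e7           # water-tank shielding, for gammas
fiducial = 1e4              # fiducial-volume cut
s2_s1 = 1.0 / (1 - 0.99)    # >99% electronic-recoil discrimination -> ~100x

gamma_suppression = gamma_water * fiducial * s2_s1
print(f"{gamma_suppression:.0e}")  # ~1e13 suppression for gammas
```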

**Results From the First Science Run**

From April to August 2013, LUX ran continuously, collecting 85.3 livedays of WIMP search data with a 118-kg fiducial mass, resulting in over ten thousand kg-days of data. A total of 83 million events were collected. Of these, only 6.5 million were single scatter events. After applying fiducial cuts and cutting on the energy region of interest, only 160 events were left. All of these 160 events were consistent with electronic recoils. Not a single WIMP was seen – the WIMP remains as elusive as the unicorn that has become the unofficial LUX mascot.

So why is this exciting? The LUX limit is the lowest yet – it represents a factor of 2-3 increase in sensitivity over the previous best limit at high WIMP masses, and it is over 20 times more sensitive than the next best limit for low-mass WIMPs.

Over the past few years, experiments such as DAMA/LIBRA, CoGeNT, CRESST, and CDMS-II Si have each reported signals that are consistent with WIMPs of mass 5-10 GeV/c^{2}. This is in direct conflict with the null results from ZEPLIN, COUPP, and XENON100, to name a few, and was the source of a fair amount of controversy in the direct detection community.

The LUX result was able to fairly definitively close the door on this question.

If the low-mass WIMPs favored by DAMA/LIBRA, CoGeNT, CRESST, and CDMS-II Si do indeed exist, then statistically speaking LUX should have seen 1500 of them!
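To see just how definitive that is: the Poisson probability of observing zero events when 1500 are expected is e^(-1500). The 1500 is the figure above; everything else here is standard Poisson statistics:

```python
import math

# Poisson probability of seeing zero counts with mean mu is exp(-mu).
mu = 1500.0
p_zero = math.exp(-mu)
print(p_zero)  # far below anything representable; underflows to 0.0 in doubles
```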

**What’s Next?**

Despite the conclusion of the 85-day science run, work on LUX carries on.

Just recently, there was a LUX talk presenting results from a calibration using low-energy neutrons as a proxy for WIMPs interacting within the detector, confirming the initial results from last autumn. Currently, LUX is gearing up for its next run, with the ultimate goal of collecting 300 livedays of WIMP-search data, which will extend the 2013 limit by a factor of five. And finally, a new detector called LZ is in the design stages, with a mass twenty times that of LUX and a far greater sensitivity.

***

For more details, the full LUX press release from October 2013 is located here:

I was very flattered to find myself on someone’s list of Top Ten 21st Century Science Non-Fiction Writers. (Unless they meant my evil twin. Grrr.)

However, as flattered as I am — and as much as I want to celebrate rather than stomp on someone’s enthusiasm for reading about science — the list is on the wrong track. One way of seeing this is that there are no women on the list at all. That would be one thing if it were a list of Top Ten 19th Century Physicists or something — back in the day, the barriers of sexism were (even) higher than they are now, and women were systematically excluded from endeavors such as science with a ruthless efficiency. And such barriers are still around. But in science *writing*, here in the 21st century, the ladies are totally taking over, and creating an all-dudes list of this form is pretty blatantly wrong.

I would love to propose a counter-list, but there’s something inherently subjective and unsatisfying about ranking people. So instead, I hereby offer this:

**List of Ten or More Twenty-First Century Science Communicators of Various Forms Who Are Really Good, All of Whom Happen to be Women, Pulled Randomly From My Twitter Feed and Presented in No Particular Order**.

- Mary Roach. You will never laugh so hard reading about science.
- Annalee Newitz. io9 is one of the best blogs out there.
- Laura Helmuth. Makes the science happen at *Slate*.
- Maryn McKenna. Bugs! And, now, food.
- Gia Mora. Singing about science totally counts.
- Sabine Hossenfelder. Blogging counts, too.
- Amy Harmon. The impact of science and technology on life.
- Lisa Randall. One of the world’s best physicists and most popular physics writers.
- Marie-Claire Shanahan. Science, gender, music.
- Rose Eveleth. Science and storytelling do mix.
- Alexandra Witze. Physics. And volcanos!
- Natalie Angier. She wrote the book on Woman.
- Elise Andrew. She loves science … a lot.
- Heather Berlin. Sadly thinks free will is an illusion.
- Amanda Gefter. Secrets of the universe.
- Maggie Ryan Sandford. Science as culture.
- Janna Levin. With whom I used to do problem sets in quantum field theory.
- Virginia Hughes. Someone has to love the microglia.
- A.V. Flox. The science/sex connection.
- Scirens. Group entry! Science on the screen.
- Janet Stemwedel. The ethics of science.
- Ann Finkbeiner. The last word on nothing.
- Elizabeth Landau. From the Worldwide Leader in News.
- Natalie Wolchover. Covering the hard physics at Quanta.
- Deborah Blum. Poison! And the occasional Pulitzer.
- Ray Burks. Be nice to chemists, they know things.
- Maia Szalavitz. Neuroscience to empathy and back.
- Florence Williams. Last year’s LA Book Festival prizewinner.
- Maggie Koerth-Baker. Covering the universe at Boing Boing.
- Faye Flam. Anyone with a cat named Higgs is okay in my book.
- Patricia Churchland. Nobody does neuroscience+philosophy better.
- Cara Santa Maria. Talking nerdy.
- Erin Biba. Physics and more at Wired.
- Holly Tucker. Blood!
- Rebecca Skloot. A deserving bestseller.
- Maria Konnikova. Explains how to think like Sherlock.
- Emily Willingham. Separating the true from the rest.
- Lisa Grossman. Explaining the skies.
- Valerie Jamieson. Injecting reality into New Scientist.
- KC Cole. One of my first favorite science writers.
- Sherry Turkle. Understanding our virtual world.
- Emily Anthes. Biotech is changing things.
- Margaret Wertheim. Crafting a new reality.
- Lauren Gunderson. Illuminating the drama inside science.
- Jennifer Ouellette. Would totally be on this list even if we weren’t married.

I’m sure it wouldn’t take someone else very long to come up with a list of female science communicators that was equally long and equally distinguished. Heck, I’m sure I could if I put a bit of thought into it. Heartfelt apologies for the many great people I left out.

Big Eyed Beans from Venus – Captain Beefheart and his Magic Band!

Comparable to Mars in effective temperature, a bit larger than Earth, and probably slightly more massive than Earth (its mean density could be lower); atmosphere unknown.

Might well have extensive surface regions with persistent liquid water.