Planet Musings

April 18, 2014

Jordan Ellenberg: Booklist and Kirkus on How Not To Be Wrong

Some good pre-publication reviews are coming in!  From Kirkus:

Witty and expansive, Ellenberg’s math will leave readers informed, intrigued and armed with plenty of impressive conversation starters.

And Booklist (not available online, unfortunately):

Relying on remarkably few technical formulas, Ellenberg writes with humor and verve as he repeatedly demonstrates that mathematics simply extends common sense. He manages to translate even the work of theoretical pioneers such as Cantor and Gödel into the language of intelligent amateurs. The surprises that await readers include not only a discovery of the astonishing versatility of mathematical thinking but also a realization of its very real limits. Mathematics, as it turns out, simply cannot resolve the real-world ambiguities surrounding the Bush-Gore cliff-hanger of 2000, nor can it resolve the much larger question of God’s existence. A bracing encounter with mathematics that matters.


April 17, 2014

Quantum Diaries: Searching for Dark Matter With the Large Underground Xenon Experiment

In December, a result from the Large Underground Xenon (LUX) experiment was featured in Nature’s Year In Review as one of the most important scientific results of 2013. As a student who has spent the past four years working on this experiment, I will do my best to provide an introduction to it and hopefully answer the question: why all the hype over what turned out to be a null result?

The LUX detector, deployed into its water tank shield 4850 feet underground.

Direct Dark Matter Detection

Weakly Interacting Massive Particles (WIMPs), or particles that interact only through the weak nuclear force and gravity, are a particularly compelling solution to the dark matter problem because they arise naturally in many extensions to the Standard Model. Quantum Diaries did a wonderful series last summer on dark matter, located here, so I won’t get into too many details about dark matter or the WIMP “miracle”, but I would like to spend a bit of time talking about direct dark matter detection.

The Earth experiences a dark matter “wind”, or flux of dark matter passing through it, due to our motion through the dark matter halo of our galaxy. Using standard models for the density and velocity distribution of the dark matter halo, we can calculate that there are nearly 1 billion WIMPs per square meter per second passing through the Earth. In order to match observed relic abundances in the universe, we expect these WIMPs to have a small yet measurable interaction cross-section with ordinary nuclei.

In other words, there must be a small-but-finite probability of an incoming WIMP scattering off a target in a laboratory in such a way that we can detect it. The goal of direct detection experiments is therefore to look for these scattering events. These events are characterized by recoil energies of a few to tens of keV, which is quite small, but it is large enough to produce an observable signal.
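As a rough sanity check on that billion-per-square-meter figure, here is a back-of-the-envelope sketch. The halo parameters and the 100 GeV WIMP mass below are illustrative assumptions of mine, not numbers taken from the post:

```python
# Back-of-the-envelope WIMP flux: flux ~ (number density) x (mean speed).
# Assumed values (not from the post): local dark matter density ~0.3 GeV/cm^3,
# mean speed ~230 km/s, WIMP mass ~100 GeV.
rho_gev_per_cm3 = 0.3
m_wimp_gev = 100.0
v_cm_per_s = 230.0e5           # 230 km/s in cm/s

n_per_cm3 = rho_gev_per_cm3 / m_wimp_gev        # ~3e-3 WIMPs per cm^3
flux_per_m2_per_s = n_per_cm3 * v_cm_per_s * 1e4

print(f"{flux_per_m2_per_s:.1e} WIMPs per square meter per second")
# ~7e8, i.e. roughly the 'nearly 1 billion' quoted above; a lighter WIMP
# would push the number even higher.
```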

So here’s the challenge: How do you build an experiment that can measure an extremely small, extremely rare signal with very high precision amid large amounts of background?

Why Xenon?

The signal from a recoil event inside a direct detection target typically takes one of three forms: scintillation light, ionization of an atom inside the target, or heat energy (phonons). Most direct detection experiments focus on one (or two) of these channels.

Xenon is a natural choice for a direct detection medium because it is easy to read out signals from two of these channels. Energy deposited in the scintillation channel is easily detectable because xenon is transparent to its own characteristic 175-nm scintillation. Energy deposited in the ionization channel is likewise easily detectable, since ionization electrons under the influence of an applied electric field can drift through xenon for distances up to several meters. These electrons can then be read out by any one of a couple different charge readout schemes.

Furthermore, the ratio of the energy deposited in these two channels is a powerful tool for discriminating between nuclear recoils, such as those from WIMPs and neutrons, which are our signal of interest, and electronic recoils, such as those from gamma rays, which are a major source of background.

Xenon is also particularly good for low-background science because of its self-shielding properties. That is, because liquid xenon is so dense, gammas and neutrons tend to attenuate within just a few cm of entering the target. Any particle that does happen to be energetic enough to reach the center of the target has a high probability of undergoing multiple scatters, which are easy to pick out and reject in software. This makes xenon ideal not just for dark matter searches, but also for other rare event searches such as neutrinoless double-beta decay.

The LUX Detector

The LUX experiment is located nearly a mile underground at the Sanford Underground Research Facility (SURF) in Lead, South Dakota. LUX rests on the 4850-foot level of the old Homestake gold mine, which was turned into a dedicated science facility in 2006.

Besides being a mining town and a center of Old West culture (the neighboring town, Deadwood, is famed as the location where Wild Bill Hickok met his demise in a poker game), Lead has a long legacy of physics. The same cavern where LUX resides once held Ray Davis’s famous solar neutrino experiment, which provided some of the first evidence for neutrino flavor oscillations and later won him the Nobel Prize.

A schematic of the LUX detector.

The detector itself is what is called a two-phase time projection chamber (TPC). It essentially consists of a 370-kg xenon target in a large titanium can. This xenon is cooled down to its condensation point (~165 K), so that the bulk of the xenon target is liquid, with a thin layer of gaseous xenon on top. LUX has 122 photomultiplier tubes (PMTs) in two arrays, one on the bottom looking up into the main volume of the detector and one on the top looking down. Just inside those arrays is a set of parallel wire grids that supply an electric field throughout the detector. A gate grid, located between the cathode and anode grids and lying close to the liquid surface, allows the electric fields in the liquid and gas regions to be tuned separately.

When an incident particle interacts with a xenon atom inside the target, it excites or ionizes the atom. In a mechanism common to all noble elements, that atom will briefly bond with another nearby xenon atom. The subsequent decay of this “dimer” back into its two constituent atoms causes a photon to be emitted in the UV. In LUX, this flash of scintillation light, called primary scintillation light or S1, is immediately detected by the PMTs. Next, any ionization charge that is produced is drifted upwards by a strong electric field (~200 V/cm) before it can recombine. This charge cloud, once it reaches the liquid surface, is pulled into the gas phase and accelerated very rapidly by an even stronger electric field (several kV/cm), causing a secondary flash of scintillation called S2, which is also detected by the PMTs. A typical signal read out from an event in LUX therefore consists of a PMT trace with two tell-tale pulses. 
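As an aside, the time between the S1 and S2 pulses also tells you how deep in the liquid the interaction occurred, since the ionization drifts upward at a roughly constant speed. A minimal sketch of that depth reconstruction, assuming a representative drift velocity of about 1.5 mm/μs (a typical liquid-xenon value at these fields, not a number quoted in the post):

```python
# Reconstruct event depth from the S1-to-S2 time separation in a two-phase TPC.
# Assumed drift velocity; the real value depends on the applied drift field.
DRIFT_VELOCITY_MM_PER_US = 1.5

def event_depth_mm(t_s1_us, t_s2_us):
    """Depth below the liquid surface implied by the drift time (times in microseconds)."""
    return (t_s2_us - t_s1_us) * DRIFT_VELOCITY_MM_PER_US

# Example: an S2 arriving 200 microseconds after the S1 puts the event ~30 cm down.
print(event_depth_mm(0.0, 200.0))   # 300.0 mm
```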

A typical event in LUX. The bottom plot shows the primary (S1) and secondary (S2) signals from each of the individual PMTs. The top two plots show the total size of the S1 and the S2 pulses.

As in any rare event search, controlling the backgrounds is of utmost importance. LUX employs a number of techniques to do so. By situating the detector nearly a mile underground, we reduce cosmic muon flux by a factor of 10⁷. Next, LUX is deployed into a 300-tonne water tank, which reduces gamma backgrounds by another factor of 10⁷ and neutrons by a factor of between 10³ and 10⁹, depending on their energy. Third, by carefully choosing a fiducial volume in the center of the detector, i.e., by cutting out events that happen near the edge of the target, we can reduce background by another factor of 10⁴. And finally, electronic recoils produce much more ionization than do the nuclear recoils that we are interested in, so by looking at the ratio S2/S1 we can achieve over 99% discrimination between gammas and potential WIMPs. All this taken into account, the estimated background for LUX is less than 1 WIMP-like event throughout 300 days of running, making it essentially a zero-background experiment. The center of LUX is in fact the quietest place in the world, radioactively speaking.
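To get a feel for how those rejection factors stack up, here is a toy multiplication for the gamma-ray background alone. Treating the quoted factors as independent is a simplification of mine, so take the result as an order-of-magnitude illustration only:

```python
# Toy combination of the gamma-background suppression factors quoted above.
water_shield  = 1e7   # attenuation in the 300-tonne water tank
fiducial_cut  = 1e4   # keeping only the inner, self-shielded volume
s2_s1_discrim = 1e2   # >99% electronic-recoil discrimination

total_suppression = water_shield * fiducial_cut * s2_s1_discrim
print(f"overall gamma suppression ~ {total_suppression:.0e}")   # ~1e+13
```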

Results From the First Science Run

From April to August 2013, LUX ran continuously, collecting 85.3 livedays of WIMP search data with a 118-kg fiducial mass, resulting in over ten thousand kg-days of data. A total of 83 million events were collected. Of these, only 6.5 million were single scatter events. After applying fiducial cuts and cutting on the energy region of interest, only 160 events were left. All of these 160 events were consistent with electronic recoils. Not a single WIMP was seen – the WIMP remains as elusive as the unicorn that has become the unofficial LUX mascot.

So why is this exciting? The LUX limit is the lowest yet – it represents a factor of 2-3 increase in sensitivity over the previous best limit at high WIMP masses, and it is over 20 times more sensitive than the next best limit for low-mass WIMPs.

The 90% confidence upper limit on the spin independent WIMP-nucleon interaction cross section: LUX compared to previous experiments.

Over the past few years, experiments such as DAMA/LIBRA, CoGeNT, CRESST, and CDMS-II Si have each reported signals that are consistent with WIMPs of mass 5-10 GeV/c². This is in direct conflict with the null results from ZEPLIN, COUPP, and XENON100, to name a few, and was the source of a fair amount of controversy in the direct detection community.

The LUX result was able to fairly definitively close the door on this question.

If the low-mass WIMPs favored by DAMA/LIBRA, CoGeNT, CRESST, and CDMS-II Si do indeed exist, then statistically speaking LUX should have seen 1500 of them!

What’s Next?

Despite the conclusion of the 85-day science run, work on LUX carries on.

Just recently, there was a LUX talk presenting results from a calibration using low-energy neutrons as a proxy for WIMPs interacting within the detector, confirming the initial results from last autumn. Currently, LUX is gearing up for its next run, with the ultimate goal of collecting 300 livedays of WIMP-search data, which will extend the 2013 limit by a factor of five. And finally, a new detector called LZ is in the design stages, with a mass twenty times that of LUX and a sensitivity far greater.

***

For more details, the full LUX press release from October 2013 is located here:

http://www.youtube.com/watch?v=SMzAuhRFNQ0

Sean Carroll: Twenty-First Century Science Writers

I was very flattered to find myself on someone’s list of Top Ten 21st Century Science Non-Fiction Writers. (Unless they meant my evil twin. Grrr.)

However, as flattered as I am — and as much as I want to celebrate rather than stomp on someone’s enthusiasm for reading about science — the list is on the wrong track. One way of seeing this is that there are no women on the list at all. That would be one thing if it were a list of Top Ten 19th Century Physicists or something — back in the day, the barriers of sexism were (even) higher than they are now, and women were systematically excluded from endeavors such as science with a ruthless efficiency. And such barriers are still around. But in science writing, here in the 21st century, the ladies are totally taking over, and creating an all-dudes list of this form is pretty blatantly wrong.

I would love to propose a counter-list, but there’s something inherently subjective and unsatisfying about ranking people. So instead, I hereby offer this:

List of Ten or More Twenty-First Century Science Communicators of Various Forms Who Are Really Good, All of Whom Happen to be Women, Pulled Randomly From My Twitter Feed and Presented in No Particular Order.

I’m sure it wouldn’t take someone else very long to come up with a list of female science communicators that was equally long and equally distinguished. Heck, I’m sure I could if I put a bit of thought into it. Heartfelt apologies for the many great people I left out.

Steinn Sigurðsson: Distant Cousins: Kepler-186f

Big Eyed Beans from Venus – Captain Beefheart and his Magic Band!

Cool. Literally

Comparable to Mars in effective temperature, a bit larger than Earth, probably slightly more massive than Earth (mean density could be lower), atmosphere unknown.
Might well have extensive surface regions with persistent liquid water.

Kepler-186f

Kepler-186f comparison

John Preskill: Talking quantum mechanics with second graders

“What’s the hardest problem you’ve ever solved?”

Kids focus right in. Driven by a ruthless curiosity, they ask questions from which adults often shy away. Which is great, if you think you know the answer to everything a 7-year-old can possibly ask you…

Two Wednesdays ago, I was invited to participate in three Q&A sessions that quickly turned into Reddit-style AMA (ask-me-anything) sessions over Skype with four 5th grade classes and one 2nd grade class of students at Medina Elementary in Medina, Washington. When asked by the organizers what I would like the sessions to focus on, I initially thought of introducing students to the mod I helped design for Minecraft, called QCraft, which brings concepts like quantum entanglement and quantum superposition into the world of Minecraft. But then I changed my mind. I told the organizers that I would talk about anything the kids wanted to know more about. It dawned on me that maybe not all 5th graders are as excited about quantum physics as I am. Yet.

The students took the bait. They peppered me with questions for over two hours — everything from “What is a quantum physicist and how do you become one?” to “What is it like to work with a fashion designer (about my collaboration with Project Runway’s Alicia Hardesty on Project X Squared)?” and, of course, “Why did you steal the cannon?” (Learn more about the infamous Cannon Heist - yes kids, there is an ongoing war between the two schools, and Caltech took the last (hot) shot just days ago.)

Caltech students visited MIT during pre-frosh weekend, bearing some clever gifts.

Then they dug a little deeper: “If we have a quantum computer that knows the answer to everything, why do we need to go to school?” This question was a little tricky, so I framed the answer like this: I compared the computer to a sidekick, and the kids—the future scientists, artists and engineers—to superheroes. Sidekicks always look up to the superheroes for guidance and leadership. And then I got this question from a young girl: “If we are superheroes, what should we do with all this power?” I thought about it for a second and though my initial inclination was to go with “You should make Angry Birds 3D!”, I went with this instead: “People often say, ‘Study hard so that one day you can cure cancer, figure out the theory of everything and save the world!’ But I would rather see you all do things to understand the world. Sometimes you think you are saving the world when it does not need saving—it is just misunderstood. Find ways to understand one another and look for the value in others. Because there is always value in others, often hiding from us behind powerful emotions.” The kids listened in silence and, in that moment, I felt profoundly connected with them and their teachers.

I wasn’t expecting any more “deep” questions, until another young girl raised her hand and asked: “Can I be a quantum physicist, or is it only for the boys?” The ferocity of my answer caught me by surprise: “Of course you can! You can do anything you set your mind to and anyone who tells you otherwise, be it your teachers, your friends or even your parents, they are just wrong! In fact, you have the potential to leave all the boys in the class behind!” The applause and laughter from all the girls sounded even louder among the thunderous silence from the boys. Which is when I realized my mistake and added: “You boys can be superheroes too! Just make sure not to underestimate the girls. For your own sake.”

Why did I feel so strongly about this issue of women in science? Caltech has a notoriously bad reputation when it comes to the representation of women among our faculty and postdocs (graduate students too?) in areas such as Physics and Mathematics. IQIM has over a dozen male faculty members in its roster and only one woman: Prof. Nai-Chang Yeh. Anyone who meets Prof. Yeh quickly realizes that she is an intellectual powerhouse with boundless energy split among her research, her many students and requests for talks, conference organization and mentoring. Which is why, invariably, every one of the faculty members at IQIM feels really strongly about finding a balance and creating a more inclusive environment for women in science. This is a complex issue that requires a lot of introspection and creative ideas from all sides over the long term, but in the meantime, I just really wanted to tell the girls that I was counting on them to help with understanding our world, as much as I was counting on the boys. Quantum mechanics? They got it. Abstract math? No problem.*

It was of course inevitable that they would want to know why we created the Minecraft mod, a collaborative work between Google, MinecraftEDU and IQIM – after all, when I asked them if they had played Minecraft before, all hands shot up. Both IQIM and Google think it is important to educate younger generations about quantum computers and the complex ideas behind quantum physics; and more importantly, to meet kids where they play, in this case, inside the Minecraft game. I explained to the kids that the game was a place where they could experiment with concepts from quantum mechanics and that we were developing other resources to make sure they had a place to go to if they wanted to know more (see our animations with Jorge Cham at http://phdcomics.com/quantum).

As for the hardest problem I have ever solved? I described it in my first blog post here, An Intellectual Tornado. The kids sat listening in some sort of trance as I described the nearly perilous journey through the lands of “agony” and “self doubt” and into the valley of “grace”, the place one reaches when they learn to walk next to their worst fears, as understanding replaces fear and respect for a far superior opponent teaches true humility and instills in you a sense of adventure. By that time, I thought I was in the clear – as far as fielding difficult questions from 10-year-olds goes – but one little devil decided to ask me this simple question: “Can you explain in 2 minutes what quantum physics is?” Sure! You see kids, emptiness, what we call the quantum vacuum, underlies the emergence of spacetime through the build-up of correlations between disjoint degrees of freedom, which we like to call entangled subsystems. The uniqueness of the Schmidt decomposition over generic quantum states, coupled with concentration of measure estimates over unequal bipartite decompositions, gives rise to Schrödinger’s evolution and the concept of unitarity – which itself only emerges in the thermodynamic limit. In the remaining minute, let’s discuss the different interpretations of the following postulates of quantum mechanics. Let’s start with measurements…

Reaching out to elementary school kids is just one way we can make science come alive, and many of us here at IQIM look forward to sharing with kids of any age our love for adventuring far and wide to understand the world around us. In case you are an expert in anything, or just passionate about something, I highly recommend engaging the next generation through visits to classrooms and Skype sessions across state lines. Because, sometimes, you get something like this from their teacher:

Hello Dr. Michalakis,

My class was lucky enough to be able to participate in one of the Skype chats you did with Medina Elementary this morning. My students returned to the classroom with so many questions, wonderings, concerns, and ideas that we could spend the remainder of the year discussing them all.

Your ability to thoughtfully answer EVERY single question posed to you was amazing. I was so impressed and inspired by your responses that I am tempted to actually spend the remainder of the year discussing quantum mechanics ☺.

I particularly appreciated your point that our efforts should focus on trying to “understand the world” rather than “save” the world. I work each day to try and inspire curiosity and wonder in my students. You accomplished more towards my goal in about 40 minutes than I probably have all year. For that I am grateful.

All the best,
A.T.

* Several of my female classmates at MIT (where I did my undergraduate degree in Math with Computer Science) had a clarity of thought and a sense of perseverance that SEAL Team Six would be envious of. So I would go to them for help with my hardest homework.


Backreaction: The Problem of Now

Einstein’s greatest blunder wasn’t the cosmological constant, and neither was it his conviction that god doesn’t throw dice. No, his greatest blunder was to speak to a philosopher named Carnap about the Now, with a capital N.

“The problem of Now”, Carnap wrote in 1963, “worried Einstein seriously. He explained that the experience of the Now means something special for men, something different from the past and the future, but that this important difference does not and cannot occur within physics.”

I call it Einstein’s greatest blunder because, unlike the cosmological constant and indeterminism, philosophers, and some physicists too, are still confused about this alleged “Problem of Now”.

The problem is often presented like this. Most of us experience a present moment, which is a special moment in time, unlike the past and unlike the future. If you write down the equations governing the motion of some particle through space, then this particle is described, mathematically, by a function. In the simplest case this is a curve in space-time, meaning the function is a map from the real numbers to a four-dimensional manifold. The particle changes its location with time. But regardless of whether you use an external definition of time (some coordinate system) or an internal definition (such as the length of the curve), every single instant on that curve is just some point in space-time. Which one, then, is “now”?

You could argue rightfully that as long as there’s just one particle moving on a straight line, nothing is happening, and so it’s not very surprising that no notion of change appears in the mathematical description. If the particle scattered off some other particle, or took a sudden turn, then these instances could be identified as events in space-time. Alas, that still doesn’t tell you whether they happen to the particle “now” or at some other time.

Now what?

The cause for this problem is often assigned to the timelessness of mathematics itself. Mathematics deals at its core with truth values, and the very point of using math to describe nature is that these truths do not change. Lee Smolin has written a whole book about the problem with this timeless math; you can read my review here.

It may or may not be that mathematics is able to describe all of our reality, but to solve the problem of now, excuse the heresy, you do not need to abandon a mathematical description of physical law. All you have to do is realize that the human experience of now is subjective. It can perfectly well be described by math, it’s just that humans are not elementary particles.

The decisive ability that allows us to experience the present moment as being unlike other moments is that we have a memory. We have a memory of events in the past, an imperfect one, and we do not have memory of events in the future. Memory is not in and by itself tied to consciousness; it is tied to the increase of entropy, or the arrow of time if you wish. Many materials show memory; every system with a path dependence, like e.g. hysteresis, does. If you get a perm, it is the molecule chains in your hair that remember the bonds, not your brain.

Memory has nothing to do with consciousness in particular which is good because it makes it much easier to find the flaw in the argument leading to the problem of now.

If we want to describe systems with memory we need at the very least two time parameters: t to parameterize the location of the particle, and τ to parameterize the strength of memory of other times depending on its present location. This means there is a function f(t,τ) that encodes how strong the memory of time τ is at moment t. You need, in other words, at the very least a two-point function; a plain particle trajectory will not do.

That we experience a “now” means that the strength of memory peaks when both time parameters are identical, i.e. t-τ = 0. That we do not have any memory of the future means that the function vanishes when τ > t. For the past it must decay somehow, but the details don’t matter. This construction is already sufficient to explain why we have the subjective experience of the present moment being special. And it wasn’t that difficult, was it?
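To make this concrete, here is a minimal sketch of such a two-time memory function. The exponential decay into the past is my own toy choice; the argument above only needs the peak at τ = t and the vanishing for τ > t:

```python
import numpy as np

def memory_strength(t, tau, decay=1.0):
    """Toy kernel f(t, tau): how strongly the moment t 'remembers' the moment tau."""
    dt = t - tau
    # Peaks at tau == t, decays into the past, identically zero for the future.
    return np.where(dt >= 0, np.exp(-dt / decay), 0.0)

t = 5.0
taus = np.arange(0.0, 11.0)
print(memory_strength(t, taus))
# Largest at tau = 5 and zero for tau > 5; the same is true at every other t,
# which is why each moment perceives itself as 'now' without being objectively special.
```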

The origin of the problem is not in the mathematics, but in the failure to distinguish subjective experience of physical existence from objective truth. Einstein spoke about “the experience of the Now [that] means something special for men”. Yes, it means something special for men. This does not mean however, and does not necessitate, that there is a present moment which is objectively special in the mathematical description. In the above construction all moments are special in the same way, but in every moment that very moment is perceived as special. This is perfectly compatible with both our experience and the block universe of general relativity. So Einstein should not have worried.

I have a more detailed explanation of this argument – including a cartoon! – in a post from 2008. I was reminded of this now because Mermin had a comment in the recent issue of Nature magazine about the problem of now.

In his piece, Mermin elaborates on QBism, a subjective interpretation of quantum mechanics. I was destined to dislike this just because it’s a waste of time and paper to write about non-existent problems. Amazingly however, Mermin uses the subjectiveness of QBism to arrive at the right conclusion, namely that the problem of the now does not exist because our experiences are by their very nature subjective. However, he fails to point out that you don’t need to buy into fancy interpretations of quantum mechanics for this. All you have to do is watch your hair recall sulphur bonds.

The summary, please forgive me, is that Einstein was wrong and Mermin is right, but for the wrong reasons. It is possible to describe the human experience of the present moment with the “timeless” mathematics that we presently use for physical laws, it isn’t even difficult, and you don’t have to give up the standard interpretation of quantum mechanics for this. There is no problem of Now, and there is no problem with Tegmark’s mathematical universe either.

And Lee Smolin, well, he is neither wrong nor right, he just has a shaky motivation for his cosmological philosophy. It is correct, as he argues, that mathematics doesn’t objectively describe a present moment. However, it’s a non sequitur that the current approach to physics has reached its limits, because this timeless math doesn’t constitute a conflict with our observations.

Most people get a general feeling of uneasiness when they first realize that the block universe implies that all of the past and all of the future are just as real as the present moment, and that even though we experience the present moment as special, it is only subjectively so. But if you can combat your uneasiness for long enough, you might come to see the beauty in eternal mathematical truths that transcend the passage of time. We always have been, and always will be, children of the universe.

Chad Orzel: String Experiment: Capillary Action is Complicated

As I’ve mentioned here before, I do a lot of work these days in my local Starbucks. This is slightly ironic, as I don’t like coffee – instead, I order tea, which I put in an insulated travel mug. I tend to get the tea, carry the mug back to the table, and let it steep while I boot up the laptop, then pull the teabags out. I get a hot water refill after I finish the first mug, and take it over to campus if I’m going in that day, and that generally carries me through the morning.

At some point, I noticed that when I had the cap on, I tended to end up with a small puddle of liquid next to the cup, either on the table, or in the cupholder of my car, dripping off the strings of the teabags. This seemed to be more pronounced when the cap was on– if I screw the cap down, I generally find drips on the table by the time the laptop is ready, but if I just carefully carry the mug with the cap off, I don’t. This struck me as odd, but on thinking about it, it makes a certain amount of sense in terms of capillary action.

Capillary action, as you may or may not know without reading that Wikipedia link, is the phenomenon where water will “climb” up a narrow tube inserted into it. One of the curious things about this is that the water climbs higher as the tube gets narrower. My tea was escaping the mug through the wicking action of the teabag string, which is basically a capillary phenomenon – water gets pulled into small spaces between the threads, and moves up the string. It’s conceivable, then, that the cap could enhance this process by compressing the string, making those spaces even smaller and increasing the capillary action.
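For reference, the textbook capillary-rise formula (Jurin's law) makes the inverse dependence on the size of the gap explicit. A quick sketch with room-temperature values for water; the tangle of threads in a string is of course far messier than a single round tube:

```python
import math

def capillary_rise_m(radius_m, surface_tension=0.0728, contact_angle_deg=0.0,
                     density=1000.0, g=9.81):
    """Jurin's law: h = 2*gamma*cos(theta) / (rho * g * r), for a round tube."""
    return (2.0 * surface_tension * math.cos(math.radians(contact_angle_deg))
            / (density * g * radius_m))

for r in (1e-3, 1e-4, 1e-5):    # tube radii of 1 mm, 0.1 mm, 0.01 mm
    print(f"radius {r:.0e} m -> rise {capillary_rise_m(r) * 100:.1f} cm")
# Each factor of ten in narrowness buys a factor of ten in climb height,
# which is why squeezing the string tighter could plausibly help the tea escape.
```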

(I have prior experience of this, from an undergraduate research project nigh on 25 years ago. We were trying to look at a Kerr effect in a liquid sample, to see a rotation of polarization that was quadratic in the applied electric field. Doing this required a fairly large electric field, so I made a sample cell that was two metal electrodes with about a millimeter between them, and filled the gap with the liquid. Then I glued a microscope slide on the top of the electrodes to seal it up so the cell wouldn’t leak, whereupon all of the liquid between the electrodes sucked up into the tiny gap between the slide and the metal, leaving nothing in the larger gap where we wanted to see the effect…)

So, this seemed like a phenomenon in need of further investigation. Which I did using the highly sophisticated apparatus you see above, consisting of my travel mug, a teabag string, and a small graduated cylinder (10ml capacity) borrowed from my lab at work. I filled the mug with hot tap water with one end of the string inside, and the other end in the top of the graduated cylinder. Then I waited a while, and measured how much liquid dripped into the cylinder.

In keeping with good scientific practice, the duration of each trial was set by whatever else I needed to do – answering a bunch of email, walking the dog, eating dinner – so for a fair comparison, I divided the amount of liquid wicked out of the mug (which varied from 0.4 ml to 1.8 ml) by the length of the trial (which varied from 27 minutes to 64 minutes) to get a flow rate in milliliters per minute. Then, I made a graph, because that’s what I do:

Flow rate through teabag string for various trials.

The black circles are with the cap on, the red with the cap off, and the green triangle is a case where I replaced the cap after the second cap-off trial. There’s a lot of variability here, but if I average together the cap-on data points I get a flow rate of 0.036 ± 0.013 ml/min (0.030 ± 0.011 ml/min if I include the replaced-cap point), and the cap-off points have an average of 0.013 ± 0.007 ml/min. It’s not a hugely significant difference, but it is consistent with my anecdotal impression.
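For completeness, the normalization described above is just the wicked volume divided by the trial length. The numbers below are illustrative endpoints of the quoted ranges, not actual paired trials (the post doesn't say which volume went with which duration):

```python
def flow_rate_ml_per_min(volume_ml, duration_min):
    """Volume wicked out of the mug, normalized by how long the trial ran."""
    return volume_ml / duration_min

print(flow_rate_ml_per_min(0.4, 64))   # ~0.006 ml/min
print(flow_rate_ml_per_min(1.8, 27))   # ~0.067 ml/min
```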

There are a bunch of caveats to this, though, chiefly that this is hugely material-dependent. In fact, it took a while to get this to work at all, because I first tried it with kitchen string (the stuff you tie meat with for roasting purposes), which produced no flow at all. That’s much thicker, and may be coated with something to prevent it absorbing too much juice during cooking, but strings from the Twinings tea bags I use at home didn’t produce anything useful, either. I ended up keeping the strings from several days’ worth of Starbucks trips and using those for the above tests. Each of the data points is a new string (so it started out dry); the scatter in points may reflect differences between strings.

And just for fun, I also repeated this with two narrow strips of paper towel, a material specifically designed for its wicking properties. Adding these to the graph gives:

Flow rate for both teabag string and strips of paper towel.

You can see that, as you might well expect, the flow rate is dramatically higher for the towel than the string. Also, the effect of the cap is opposite– the cap-off case produced a higher flow rate by around a factor of 3. Constriction was an obstacle to the wicking behavior there, which it clearly was not for the string.

So, what have we learned from this, other than “Chad is an enormous nerd?” (which we knew already…) Well, basically, that capillary action is complicated. To really do this right would require a lot more fiddling around, and more careful control of the parameters, investigation of more fibers, etc. Which could be fun, but seems like a lot of work, so I’ll just leave it here.

April 16, 2014

Mark Chu-Carroll: A Recipe for Gefilte Fish

My mom died last Friday night, a little bit after midnight. I’ll probably write something about her, when I’m feeling up to it, but not yet. Being a Jewish dad, when I’m depressed, what do I do? I cook.

Last year, for the first time ever, I made homemade gefilte fish for Pesach. If you didn’t grow up Jewish, odds are you’re not familiar with gefilte fish. It’s a traditional Ashkenazi (that is, Eastern European Jewish) dish. It was originally a ground fish mixture cooked inside the body cavity of a fish. It evolved into just the stuffing mixture, simmered in a fish stock. These days, most people just buy it in a jar. If you grew up with it, even out of the jar, it’s a treat; if you didn’t, and you’ve been exposed to it, it looks and smells like dog food.

Honestly, I love the stuff. In general, I’m not a big fan of most traditional Jewish foods. But there’s something about gefilte fish. But even as I enjoy it, I can see the gross side. It’s crazy overprocessed – I mean, come on – it’s fish that will keep, unrefrigerated, for years!

But made fresh, it’s a whole different kettle of fish. This stuff is really good. You’ll definitely recognize the flavor of this as gefilte fish, but it’s a much cleaner flavor. It tastes like fish, not like stale overprocessed fish guts.

So this year, I’m depressed over my mom; after the funeral, I sent my wife out to buy me a bunch of fish, and I made up a batch. This time, I kept notes on how I did it – and it turned out even better than last year.

It’s got a bit of a twist in the recipe. I’m married to a Chinese woman, so when the Jewish holidays roll around, I always try to find some way of putting an Asian spin on the food, to reflect the nature of our family. So when I cooked the gefilte fish, instead of cooking it in the traditional simple fish broth, I cooked it in dashi. It’s not Chinese, but it’s got a lot of flavors that are homey for a Chinese person.

So… here’s the recipe for Mark’s homemade salmon dashi gefilte fish!

Ingredients

  • 2 whole pike, gutted and cleaned, but with skin, head, and bones
  • 2 whole red snapper, gutted and cleaned, but with skin, head and bones
  • 2 pounds salmon filet
  • 3/4 to 1 cup matzoh meal
  • 3 eggs
  • salt (to taste)
  • 2 sheets of konbu (Japanese dried kelp)
  • 2 handfuls dried shaved bonito
  • 4 or 5 slices of fresh ginger, crushed
  • 2 onions
  • 2 large carrots

(For the fish for this, you really want the bones, the skins, and the head. If you’ve got a fish market that will fillet it for you, and then give you all of the parts, have them do that. Otherwise, do it yourself. Don’t worry about how well you can fillet it – it’s going to get ground up, so if you do a messy job, it’s not a problem.)

Instructions

  1. First thing, you need to make the stock that you’ll eventually cook the gefilte fish in:
    1. If the fish store didn’t fillet the fish for you, you need to remove the filets from the fish, and then remove the skin from the filets.
    2. Put all of the bones, skin, and head into a stock pot.
    3. Cover the fish bones with water.
    4. Add one onion, and all of the garlic and ginger to the pot.
    5. Heat to a boil, and then simmer for two hours.
    6. Strain out all of the bones, and put the stock back into the pot and bring to a boil.
    7. Add the kombu to the stock, and let it simmer for 30 minutes.
    8. Remove from the heat, and strain out the kombu.
    9. Add the bonito (off the heat), and let it sit for 15 minutes.
    10. Strain out the bonito and any remaining solids.
    11. Add salt to taste.
  2. While the stock is simmering, you can get started on the fish:
    1. Cut all of the fish into chunks, and put them through a meat grinder with a coarse blade (or grind them coarsely in batches in a food processor.)
    2. Cut the onion and carrots into chunks, and put them through the grinder as well.
    3. Beat the eggs. Fold the eggs and the salt into the ground fish mixture.
    4. Add in matzoh meal gradually, until the mixture holds together.
    5. Refrigerate for two hours.
  3. Now you’re ready to cook the gefilte fish!
    1. Heat the stock up to a gentle simmer.
    2. Scoop up about two tablespoons of the fish mixture at a time, and roll it into balls.
    3. Add the fish balls into the simmering stock. Don’t overcrowd the pot – add no more than can fit into the pot in a single layer.
    4. Simmer for 10-15 minutes until the fish balls are cooked through.
    5. Remove the balls from the simmering liquid. Repeat until all of the fish is cooked.
    6. Put all the cooked fish balls back into the stock, and refrigerate.

n-Category Café: Enrichment and the Legendre-Fenchel Transform I

The Legendre-Fenchel transform, or Fenchel transform, or convex conjugation, is, in its naivest form, a duality between convex functions on a vector space and convex functions on the dual space. It is of central importance in convex optimization theory and in physics it is used to switch between Hamiltonian and Lagrangian perspectives.

[Graphs: a strictly convex function and its Legendre-Fenchel transform.]

Suppose that $V$ is a real vector space and that $f\colon V\to [-\infty,+\infty]$ is a function; then the Fenchel transform is the function $f^{\ast}\colon V^{\#}\to [-\infty,+\infty]$ defined on the dual vector space $V^{\#}$ by
$$f^{\ast}(k) \coloneqq \sup_{x\in V}\big\{ \langle k,x\rangle - f(x)\big\}.$$

If you’re a regular reader then you will be unsurprised when I say that I want to show how it naturally arises from enriched category theory constructions. I’ll show that in the next post. In this post I’ll give a little introduction to the Legendre-Fenchel transform.

There is probably no best way to introduce the Legendre-Fenchel transform. The only treatment that I knew for many years was in Arnold’s book Mathematical Methods of Classical Mechanics, but I have recently come across the convex optimization literature and would highly recommend Touchette’s The Fenchel Transform in a Nutshell — my treatment here is heavily influenced by this paper. I will talk mainly about the one-dimensional case as I think that gives a lot of the intuition.

We will start, as Legendre did, with the special case of a strictly convex differentiable function $f\colon \mathbb{R}\to \mathbb{R}$; for instance, the function $x^{2}+1/2$ pictured on the left hand side above. The derivative of $f$ is strictly increasing and so the function $f$ can be parametrized by the derivative $k = df/dx$ instead of the parameter $x$. Indeed we can write the parameter $x$ in terms of the slope $k$, $x = x(k)$. The Legendre-Fenchel transform $f^{\ast}$ can then be defined to satisfy
$$\langle k,x\rangle = f(x) + f^{\ast}(k),$$
where the angle brackets mean the pairing between a vector space and its dual. In this one-dimensional case, where $x$ and $k$ are thought of as real numbers, we just have $\langle k,x\rangle = kx$.

As $x$ is a function of $k$ we can rewrite this as
$$f^{\ast}(k) \coloneqq \langle k, x(k)\rangle - f(x(k)).$$
So the Legendre-Fenchel transform encodes the function in a different way. By differentiating this equation you can see that $df^{\ast}/dk = x(k)$; thus we have interchanged the abscissa (the horizontal co-ordinate) and the slope. So if $f$ has derivative $k_{0}$ at $x_{0}$ then $f^{\ast}$ has derivative $x_{0}$ at $k_{0}$. This is illustrated in the above picture.
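As a quick worked example (just the standard computation spelled out): for $f(x) = x^{2} + 1/2$ we have $k = df/dx = 2x$, so $x(k) = k/2$ and
$$f^{\ast}(k) = \langle k, x(k)\rangle - f(x(k)) = \frac{k^{2}}{2} - \frac{k^{2}}{4} - \frac{1}{2} = \frac{k^{2}}{4} - \frac{1}{2},$$
and indeed $df^{\ast}/dk = k/2 = x(k)$, as claimed.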

I believe this is what Legendre did; what Fenchel then did was to generalize this to non-differentiable functions.

For non-differentiable functions, we can’t talk about tangent lines and derivatives, but instead can talk about supporting lines. A supporting line is one which touches the graph at at least one point and never goes above the graph. (The fact that we’re singling out lines not going above the graph means that we have convex functions in mind.)

For instance, at the point $(x_{0}, f(x_{0}))$ the graph pictured below has no tangent line, but it has supporting lines with gradient ranging from $k_{1}$ to $k_{2}$. A convex function will have at least one supporting line at each point.

[Graphs: a non-differentiable convex function with its supporting lines, and its transform.]

It transpires that the right way to generalize the transform to this non-differentiable case is to define it as follows:
$$f^{\ast}(k) \coloneqq \sup_{x\in \mathbb{R}}\big\{ \langle k,x\rangle - f(x)\big\}.$$
In this case, if $f$ has a supporting line of slope $k_{0}$ at $x_{0}$ then $f^{\ast}$ has a supporting line of slope $x_{0}$ at $k_{0}$. In the picture above, at $x_{0}$, the function $f$ has supporting lines with slope from $k_{1}$ to $k_{2}$; correspondingly, the function $f^{\ast}$ has supporting lines with slope $x_{0}$ all the way from $k_{1}$ to $k_{2}$.

If we allow the function $f$ to be not strictly convex then the transform will not always be finite. For example, if $f(x) \coloneqq ax+b$ then we have $f^{\ast}(a) = -b$ and $f^{\ast}(k) = +\infty$ for $k \ne a$. So we will allow functions taking values in the extended real numbers $\overline{\mathbb{R}} \coloneqq [-\infty,+\infty]$.

We can use the above definition to get the transform of any function $f\colon \mathbb{R}\to \overline{\mathbb{R}}$, whether convex or not, but the resulting transform $f^{\ast}$ is always convex. (When there are infinite values involved we can also say that $f^{\ast}$ is lower semi-continuous, but I’ll absorb that into my definition of convex for functions taking infinite values.)

Everything we’ve done for one-dimensional $\mathbb{R}$ easily generalizes to any finite dimensional real vector space $V$, where we should say ‘supporting hyperplane’ instead of ‘supporting line’. From that we get a transform between sets of functions
$$(\text{--})^{\ast}\colon \mathrm{Fun}(V,\overline{\mathbb{R}})\to \mathrm{Fun}(V^{\#},\overline{\mathbb{R}}),$$
where $V^{\#}$ is the vector space dual of $V$. Similarly, we have a reverse transform going the other way, which is traditionally also denoted with a star,
$$(\text{--})^{\ast}\colon \mathrm{Fun}(V^{\#},\overline{\mathbb{R}})\to \mathrm{Fun}(V,\overline{\mathbb{R}});$$
for $g\colon V^{\#}\to \overline{\mathbb{R}}$ we define
$$g^{\ast}(x) \coloneqq \sup_{k\in V^{\#}}\big\{ \langle k,x\rangle - g(k)\big\}.$$

This pair of transforms has some rather nice properties; for instance, they are order reversing. We can put a partial order on any set of functions to $\overline{\mathbb{R}}$ by defining $f_{1} \ge f_{2}$ if $f_{1}(x) \ge f_{2}(x)$ for all $x$. Then
$$f_{1} \ge f_{2} \quad \Rightarrow \quad f_{2}^{\ast} \ge f_{1}^{\ast}.$$
Also, for any function $f$ we have $f^{\ast} = f^{\ast\ast\ast}$, which implies that the operator $f \mapsto f^{\ast\ast}$ is idempotent:
$$f^{\ast\ast} = f^{\ast\ast\ast\ast}.$$
This means that $f \mapsto f^{\ast\ast}$ is a closure operation. What it actually does is take the convex envelope of $f$, which is the largest convex function less than or equal to $f$. Here’s an example.

[Graphs: a non-convex function and its convex envelope $f^{\ast\ast}$.]

This gives that if $f$ is already a convex function then $f^{\ast\ast} = f$. As a consequence, the Legendre-Fenchel transform and the reverse transform restrict to an order reversing bijection between convex functions on $V$ and convex functions on its dual $V^{\#}$:
$$\mathrm{Cvx}(V,\overline{\mathbb{R}}) \cong \mathrm{Cvx}(V^{\#},\overline{\mathbb{R}}).$$
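As a numerical sanity check on these properties, here is a small brute-force sketch of the discrete transform on a grid (nothing clever, and the grid for $k$ is chosen wide enough that the suprema are attained in its interior):

```python
import numpy as np

def fenchel(xs, fvals, ks):
    """Discrete Legendre-Fenchel transform: f*(k) = sup_x [k*x - f(x)] on a grid."""
    return np.array([np.max(k * xs - fvals) for k in ks])

xs = np.linspace(-3.0, 3.0, 601)
ks = np.linspace(-6.0, 6.0, 1201)

f = xs**2 + 0.5                          # a convex function
f_star = fenchel(xs, f, ks)              # should approximate k^2/4 - 1/2
f_star_star = fenchel(ks, f_star, xs)    # should recover f itself

print(np.max(np.abs(f_star - (ks**2 / 4 - 0.5))))   # small (set by the grid spacing)
print(np.max(np.abs(f_star_star - f)))              # small: f** = f for convex f
```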

There are many other things that can be said about the transform, such as Fenchel duality and the role it plays in optimization, but I don’t understand such things to my own satisfaction yet.

Next time I’ll explain how most of the above structure drops out of the nucleus construction in enriched category theory.

Chad Orzel: Uncertain Dots, Episode 12

The last couple of days have been ridiculously hectic, but Rhett and I did manage to record another episode of Uncertain Dots, our twelfth:

This time out, we talk about labs, undergrad research, kids doing chores, weather, student course evaluations, and I didn’t really rant about superheroes. Relevant to the weather thing, I offer the “featured image” up top, showing last night’s snow at Chateau Steelypips. Spring in New England, baby!

Backreaction: Book review: “The Theoretical Minimum – Quantum Mechanics” By Susskind and Friedman

Quantum Mechanics: The Theoretical Minimum
What You Need to Know to Start Doing Physics
By Leonard Susskind, Art Friedman
Basic Books (February 25, 2014)

This book is the second volume in a series that we can expect to be continued. The first part covered Classical Mechanics. You can read my review here.

The volume on quantum mechanics seems to have come into being much like the first: Leonard Susskind teamed up with Art Friedman, a data consultant whose role I envision being to say “Wait, wait, wait” whenever the professor’s pace gets too fast. The result is an introduction to quantum mechanics like I haven’t seen before.

The ‘Theoretical Minimum’ focuses, as its name promises, on the absolute minimum and aims at being accessible with no previous knowledge other than the first volume. The necessary math is provided along the way in separate interludes that can be skipped. The book begins with explaining state vectors and operators, the bra-ket notation, then moves on to measurements, entanglement and time-evolution. It uses the concrete example of spin-states and works its way up to Bell’s theorem, which however isn’t explicitly derived, just captured verbally. However, everybody who has made it through Susskind’s book should be able to then understand Bell’s theorem. It is only in the last chapters that the general wave-function for particles and the Schrödinger equation make an appearance. The uncertainty principle is derived and path integrals are very briefly introduced. The book ends with a discussion of the harmonic oscillator, clearly building up towards quantum field theory there.

I find the approach to quantum mechanics in this book valuable for several reasons. First, it gives a prominent role to entanglement and density matrices, pure and mixed states, Alice and Bob, and traces over subspaces. The book thus provides you with the ‘minimal’ equipment you need to understand what all the fuss with quantum optics, quantum computing, and black hole evaporation is about. Second, it doesn’t dismiss philosophical questions about the interpretation of quantum mechanics, but it also doesn’t give these very prominent space. They are acknowledged, but then it gets back to the physics. Third, the book is very careful in pointing out common misunderstandings or alternative notations, thus preventing much potential confusion.

The decision to go from classical mechanics straight to quantum mechanics has its disadvantages though. Normally the student encounters Electrodynamics and Special Relativity in between, but if you want to read Susskind’s lectures as self-contained introductions, the author now doesn’t have much to work with. This time-ordering problem means that every once in a while a reference to Electrodynamics or Special Relativity is bound to confuse the reader who really doesn’t know anything besides this lecture series.

It also must be said that the book, due to its emphasis on minimalism, will strike some readers as entirely disconnected from history and experiment. Not even the double-slit, the ultraviolet catastrophe, the hydrogen atom or the photoelectric effect made it into the book. This might not be for everybody. Again however, if you’ve made it through the book you are then in a good position to read up on these topics elsewhere. My only real complaint is that Ehrenfest’s name doesn’t appear together with his theorem.

The book isn’t written like your typical textbook. It has fairly long passages that offer a lot of explanation around the equations, and the chapters are introduced with brief dialogues between fictitious characters. I don’t find these dialogues particularly witty, but at least the humor isn’t as nauseating as that in Goldberg’s book.

Altogether, the “Theoretical Minimum” achieves what it promises. If you want to make the step from popular science literature to textbooks and the general scientific literature, then this book series is a must-read. If you can’t make your way through abstract mathematical discussions and prefer a close connection to examples and history, you might however find it hard to get through this book.

I am certainly looking forward to the next volume.

(Disclaimer: Free review copy.)

Doug Natelson: Recurring themes in (condensed matter/nano) physics: spatial periodicity

A defining characteristic of crystalline solids is that their constituent atoms are arranged in a spatially periodic way.  In fancy lingo, the atomic configuration breaks continuous translational and rotational invariance (that is, it picks out certain positions and orientations in space from an infinite variety of possible choices), but preserves discrete translational invariance (and other possible symmetries).  

The introduction of a characteristic spatial length scale, or equivalently a spatial frequency, is a big deal, because when other spatial length scales in the physical system coincide with that one, there can be big consequences.  For example, when the wavelength of x-rays or electrons or neutrons is some integer harmonic of the (projected) lattice spacing, then waves scattered from successive (or every second or every third, etc.) planes of atoms will interfere constructively - this is called the Bragg condition, and it is what gives the diffraction patterns that have proven so useful in characterizing material structures.  Another way to think about this:  The spatial periodicity of the lattice is what forces the momentum of scattered x-rays (or electrons or neutrons) to change only by specified amounts.
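As a concrete illustration of the Bragg condition, nλ = 2d sin θ, here is a short sketch that lists the constructive-interference angles for a given plane spacing and wavelength. The numbers are generic illustrative values (0.154 nm is a common lab x-ray wavelength), not anything specified in the post:

```python
import math

def bragg_angles_deg(d_spacing_nm, wavelength_nm, max_order=4):
    """Angles theta satisfying n*lambda = 2*d*sin(theta), for n = 1..max_order."""
    angles = []
    for n in range(1, max_order + 1):
        s = n * wavelength_nm / (2.0 * d_spacing_nm)
        if s <= 1.0:                    # otherwise this order cannot diffract
            angles.append((n, math.degrees(math.asin(s))))
    return angles

# Example: 0.154 nm x-rays scattering off planes spaced 0.31 nm apart.
for n, theta in bragg_angles_deg(0.31, 0.154):
    print(f"order {n}: theta = {theta:.1f} degrees")
```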

It gets better.  When the wavelength of electrons bound in a crystalline solid corresponds to some integer multiple of the lattice spacing, this implies that the electrons strongly "feel" any interaction with the lattice atoms - in the nearly-free-electron picture, this matching of spatial frequencies is what opens up band gaps at particular wavevectors (and hence energies).  Similar physics happens with lattice vibrations.  Similar physics happens when we consider electromagnetic waves in spatially periodic dielectrics.  Similar physics happens when looking at electrons in a "superlattice" made by layering different semiconductors or a periodic modulation of surface relief.
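In the nearly-free-electron picture, mixing the two plane waves connected by a reciprocal lattice vector splits the bands by twice the corresponding Fourier component of the lattice potential. A toy sketch of that standard one-dimensional two-band formula, with arbitrary units chosen for illustration:

```python
import numpy as np

# 1D nearly-free-electron two-band model:
# E±(k) = (e_k + e_{k-G})/2 ± sqrt(((e_k - e_{k-G})/2)^2 + |U_G|^2),
# so the gap right at the zone boundary k = G/2 is 2*|U_G|.
hbar2_over_2m = 1.0          # arbitrary units
G = 2.0 * np.pi              # reciprocal lattice vector for lattice constant a = 1
U_G = 0.5                    # Fourier component of the periodic potential (toy value)

def bands(k):
    e1 = hbar2_over_2m * k**2
    e2 = hbar2_over_2m * (k - G)**2
    avg, half_diff = (e1 + e2) / 2.0, (e1 - e2) / 2.0
    split = np.sqrt(half_diff**2 + U_G**2)
    return avg - split, avg + split

lower, upper = bands(G / 2.0)     # evaluate right at the zone boundary
print(upper - lower)              # 1.0 = 2*|U_G|: the band gap
```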

One other important point.  The idea of a true spatial periodicity really only applies to infinitely large periodic systems.  If discrete translational invariance is broken (by a defect, or an interface), then some of the rules "enforced" by the periodicity can be evaded.  For example, momentum changes forbidden for elastic scattering in a perfect infinite crystal can take place at some rate at interfaces or in defective crystals.  Similarly, the optical selection rules that must be rigidly applied in perfect crystals can be bent a bit in nanocrystals, where lattice periodicity is not infinite.

Commensurate spatial periodicities between wave-like entities and lattices are responsible for electronic and optical bandgaps, phonon dispersion relations, x-ray/electron/neutron crystallography, (crystal) momentum conservation and its violation in defective and nanoscale structures, and optical selection rules and their violations in crystalline solids.  Rather far-reaching consequences!

April 15, 2014

Quantum Diaries: Ten things you might not know about particle accelerators

A version of this article appeared in symmetry on April 14, 2014.

From accelerators unexpectedly beneath your feet to a ferret that once cleaned accelerator components, symmetry shares some lesser-known facts about particle accelerators. Image: Sandbox Studio, Chicago

The Large Hadron Collider at CERN laboratory has made its way into popular culture: Comedian Jon Stewart jokes about it on The Daily Show, character Sheldon Cooper dreams about it on The Big Bang Theory and fictional villains steal fictional antimatter from it in Angels & Demons.

Despite their uptick in popularity, particle accelerators still have secrets to share. With input from scientists at laboratories and institutions worldwide, symmetry has compiled a list of 10 things you might not know about particle accelerators.

There are more than 30,000 accelerators in operation around the world.

Accelerators are all over the place, doing a variety of jobs. They may be best known for their role in particle physics research, but their other talents include: creating tumor-destroying beams to fight cancer; killing bacteria to prevent food-borne illnesses; developing better materials to produce more effective diapers and shrink wrap; and helping scientists improve fuel injection to make more efficient vehicles.

One of the longest modern buildings in the world was built for a particle accelerator.

Linear accelerators, or linacs for short, are designed to hurl a beam of particles in a straight line. In general, the longer the linac, the more powerful the particle punch. The linear accelerator at SLAC National Accelerator Laboratory, near San Francisco, is the largest on the planet.

SLAC’s klystron gallery, a building that houses components that power the accelerator, sits atop the accelerator. It’s one of the world’s longest modern buildings. Overall, it’s a little less than 2 miles long, a feature that prompts laboratory employees to hold an annual footrace around its perimeter.

Particle accelerators are the closest things we have to time machines, according to Stephen Hawking.

In 2010, physicist Stephen Hawking wrote an article for the UK paper the Daily Mail explaining how it might be possible to travel through time. We would just need a particle accelerator large enough to accelerate humans the way we accelerate particles, he said.

A person-accelerator with the capabilities of the Large Hadron Collider would move its passengers at close to the speed of light. Because of the effects of special relativity, a period of time that would appear to someone outside the machine to last several years would seem to the accelerating passengers to last only a few days. By the time they stepped off the LHC ride, they would be younger than the rest of us.

Hawking wasn’t actually proposing we try to build such a machine. But he was pointing out a way that time travel already happens today. For example, particles called pi mesons are normally short-lived; they disintegrate after mere millionths of a second. But when they are accelerated to nearly the speed of light, their lifetimes expand dramatically. It seems that these particles are traveling in time, or at least experiencing time more slowly relative to other particles.
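
For the curious, here is a rough sketch of the arithmetic behind that claim. The speed plugged in below is purely illustrative (neither Hawking's article nor this one quotes a number); it is simply chosen close enough to light speed that a few days on board correspond to several years outside.

```python
import math

def dilation_factor(v_over_c):
    """Lorentz gamma factor for a speed given as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

# Illustrative speed only: close enough to c that days on board become years outside.
v_over_c = 0.9999995
gamma = dilation_factor(v_over_c)
days_on_board = 3

print(f"gamma = {gamma:.0f}")
print(f"{days_on_board} days on board ~ {days_on_board * gamma / 365:.1f} years outside")
```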

The highest temperature recorded by a manmade device was achieved in a particle accelerator.

In 2012, Brookhaven National Laboratory’s Relativistic Heavy Ion Collider achieved a Guinness World Record for producing the world’s hottest manmade temperature, a blazing 7.2 trillion degrees Fahrenheit. But the Long Island-based lab did more than heat things up. It created a small amount of quark-gluon plasma, a state of matter thought to have dominated the universe’s earliest moments. This plasma is so hot that it causes elementary particles called quarks, which generally exist in nature only bound to other quarks, to break apart from one another.

Scientists at CERN have since also created quark-gluon plasma, at an even higher temperature, in the Large Hadron Collider.

The inside of the Large Hadron Collider is colder than outer space.

In order to conduct electricity without resistance, the Large Hadron Collider’s electromagnets are cooled down to cryogenic temperatures. The LHC is the largest cryogenic system in the world, and it operates at a frosty minus 456.3 degrees Fahrenheit. It is one of the coldest places on Earth, and it’s even a few degrees colder than outer space, which tends to rest at about minus 454.9 degrees Fahrenheit.
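
Since this list quotes its temperatures in Fahrenheit, here is a quick conversion of the figures above, plus the RHIC record from two entries back, into kelvin. The conversion formula is the standard one; the Fahrenheit values are the ones quoted in the text.

```python
def fahrenheit_to_kelvin(temp_f):
    """Standard conversion from degrees Fahrenheit to kelvin."""
    return (temp_f + 459.67) * 5.0 / 9.0

figures = [
    ("LHC magnets", -456.3),
    ("outer space", -454.9),
    ("RHIC quark-gluon plasma", 7.2e12),  # 7.2 trillion degrees Fahrenheit
]
for label, temp_f in figures:
    print(f"{label}: {fahrenheit_to_kelvin(temp_f):.3g} K")
```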

Nature produces particle accelerators much more powerful than anything made on Earth.

We can build some pretty impressive particle accelerators on Earth, but when it comes to achieving high energies, we’ve got nothing on particle accelerators that exist naturally in space.

The most energetic cosmic ray ever observed was a proton accelerated to an energy of 300 million trillion electronvolts. No known source within our galaxy is powerful enough to have caused such an acceleration. Even the shockwave from the explosion of a star, which can send particles flying much more forcefully than a manmade accelerator, doesn’t quite have enough oomph. Scientists are still investigating the source of such ultra-high-energy cosmic rays.
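
For a rough sense of scale, the snippet below compares that cosmic-ray energy with the 6.5 trillion electronvolts per proton quoted for the upgraded LHC later in this list; both numbers come from this article.

```python
# Both energies are quoted in this article, expressed here in electronvolts.
cosmic_ray_energy_ev = 300e6 * 1e12   # "300 million trillion electronvolts"
lhc_proton_energy_ev = 6.5e12         # 6.5 trillion electronvolts per proton

print(f"cosmic ray / LHC proton ~ {cosmic_ray_energy_ev / lhc_proton_energy_ev:.1e}")
```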

Particle accelerators don’t just accelerate particles; they also make them more massive.

As Einstein predicted in his theory of relativity, no particle that has mass can travel as fast as the speed of light—about 186,000 miles per second. No matter how much energy one adds to an object with mass, its speed cannot reach that limit.

In modern accelerators, particles are sped up to very nearly the speed of light. For example, the main injector at Fermi National Accelerator Laboratory accelerates protons to 0.99997 times the speed of light. As the speed of a particle gets closer and closer to the speed of light, an accelerator gives more and more of its boost to the particle’s kinetic energy.

Since, as Einstein told us, an object’s energy is equal to its mass times the speed of light squared (E=mc²), adding energy is, in effect, also increasing the particles’ mass. Said another way: Where there is more “E,” there must be more “m.” As an object with mass approaches, but never reaches, the speed of light, its effective mass gets larger and larger.
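
To see what the Main Injector figure above implies, here is the corresponding boost factor; the proton rest-mass energy of 0.938 GeV is a standard value rather than something quoted in the article.

```python
import math

PROTON_REST_ENERGY_GEV = 0.938  # standard value, not quoted in the article

def gamma(v_over_c):
    """Relativistic boost factor 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - v_over_c ** 2)

v_over_c = 0.99997  # the Main Injector speed quoted above
boost = gamma(v_over_c)

print(f"gamma = {boost:.0f}")
print(f"total energy ~ {boost * PROTON_REST_ENERGY_GEV:.0f} GeV, versus {PROTON_REST_ENERGY_GEV} GeV at rest")
```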

The diameter of the first circular accelerator was shorter than 5 inches; the diameter of the Large Hadron Collider is more than 5 miles.

In 1930, inspired by the ideas of Norwegian engineer Rolf Widerøe, 27-year-old physicist Ernest Lawrence created the first circular particle accelerator at the University of California, Berkeley, with graduate student M. Stanley Livingston. It accelerated hydrogen ions up to energies of 80,000 electronvolts within a chamber less than 5 inches across.

In 1931, Lawrence and Livingston set to work on an 11-inch accelerator. The machine managed to accelerate protons to just over 1 million electronvolts, a fact that Livingston reported to Lawrence by telegram with the added comment, “Whoopee!” Lawrence went on to build even larger accelerators—and to found Lawrence Berkeley and Lawrence Livermore laboratories.

Particle accelerators have come a long way since then, creating brighter beams of particles with greater energies than previously imagined possible. The Large Hadron Collider at CERN is more than 5 miles in diameter (17 miles in circumference). After this year’s upgrades, the LHC will be able to accelerate protons to 6.5 trillion electronvolts.

In the 1970s, scientists at Fermi National Accelerator Laboratory employed a ferret named Felicia to clean accelerator parts.

From 1971 until 1999, Fermilab’s Meson Laboratory was a key part of high-energy physics experiments at the laboratory. To learn more about the forces that hold our universe together, scientists there studied subatomic particles called mesons and protons. Operators would send beams of particles from an accelerator to the Meson Lab via a miles-long underground beam line.

To ensure hundreds of feet of vacuum piping were clear of debris before connecting them and turning on the particle beam, the laboratory enlisted the help of one Felicia the ferret.

Ferrets have an affinity for burrowing and clambering through holes, making them the perfect species for this job. Felicia’s task was to pull a rag dipped in cleaning solution on a string through long sections of pipe.

Although Felicia’s work was eventually taken over by a specially designed robot, she played a unique and vital role in the construction process—and in return asked only for a steady diet of chicken livers, fish heads and hamburger meat.

Particle accelerators show up in unlikely places.

Scientists tend to construct large particle accelerators underground. This protects them from being bumped and destabilized, but can also make them a little harder to find.

For example, motorists driving down Interstate 280 in northern California may not notice it, but the main accelerator at SLAC National Accelerator Laboratory runs underground just beneath their wheels.

Residents in villages in the Swiss-French countryside live atop the highest-energy particle collider in the world, the Large Hadron Collider.

And for decades, teams at Cornell University have played soccer, football and lacrosse on Robison Alumni Fields 40 feet above the Cornell Electron Storage Ring, or CESR. Scientists use the circular particle accelerator to study compact particle beams and to produce X-ray light for experiments in biology, materials science and physics.

Sarah Witman

Terence TaoPolymath8b, X: writing the paper, and chasing down loose ends

This is the tenth thread for the Polymath8b project to obtain new bounds for the quantity

H_m := \liminf_{n \to\infty} p_{n+m} - p_n;

the previous thread may be found here.

Numerical progress on these bounds has slowed in recent months, although we have very recently lowered the unconditional bound on H_1 from 252 to 246 (see the wiki page for more detailed results).  While there may still be scope for further improvement (particularly with respect to bounds for H_m with m=2,3,4,5, which we have not focused on for a while), it looks like we have reached the point of diminishing returns, and it is time to turn to the task of writing up the results.

A draft version of the paper so far may be found here (with the directory of source files here).  Currently, the introduction and the sieve-theoretic portions of the paper are written up, although the sieve-theoretic arguments are surprisingly lengthy, and some simplification (or other reorganisation) may well be possible.  Other portions of the paper that have not yet been written up include the asymptotic analysis of M_k for large k (leading in particular to results for m=2,3,4,5), and a description of the quadratic programming that is used to estimate M_k for small and medium k.  Also we will eventually need an appendix to summarise the material from Polymath8a that we would use to generate various narrow admissible tuples.

One issue here is that our current unconditional bounds on H_m for m=2,3,4,5 rely on a distributional estimate on the primes which we believed to be true in Polymath8a, but never actually worked out (among other things, there were some delicate algebraic geometry issues concerning the vanishing of certain cohomology groups that were never resolved).  This issue does not affect the m=1 calculations, which only use the Bombieri-Vinogradov theorem or else assume the generalised Elliott-Halberstam conjecture.  As such, we will have to rework the computations for these H_m, given that the task of trying to attain the conjectured distributional estimate on the primes would be a significant amount of work that is rather disjoint from the rest of the Polymath8b writeup.  One could simply dust off the old Maple code for this (e.g. one could tweak the code here, with the constraint  1080*varpi/13+ 330*delta/13<1  being replaced by 600*varpi/7+180*delta/7<1), but there is also a chance that our asymptotic bounds for M_k (currently given in messy detail here) could be sharpened.  I plan to look at this issue fairly soon.
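
For readability, here is the constraint swap mentioned above in displayed form (these are exactly the inequalities quoted in the preceding paragraph):

\displaystyle \frac{1080 \varpi}{13} + \frac{330 \delta}{13} < 1 \quad \longrightarrow \quad \frac{600 \varpi}{7} + \frac{180 \delta}{7} < 1.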

Also, there are a number of smaller observations (e.g. the parity problem barrier that prevents us from ever getting a better bound on H_1 than 6) that should also go into the paper at some point; the current outline of the paper as given in the draft is not necessarily comprehensive.


Filed under: polymath

Sean CarrollTalks on God and Cosmology

Hey, remember the debate I had with William Lane Craig, on God and Cosmology? (Full video here, my reflections here.) That was on a Friday night, and on Saturday morning the event continued with talks from four other speakers, along with responses by WLC and me. At long last these Saturday talks have appeared on YouTube, so here they are!

First up was Tim Maudlin, who usually focuses on philosophy of physics but took the opportunity to talk about the implications of God’s existence for morality. (Namely, he thinks there aren’t any.)

Then we had Robin Collins, who argued for a new spin on the fine-tuning argument, saying that the universe is constructed to allow for it to be discoverable.

Back to Team Naturalism, Alex Rosenberg explains how the appearance of “design” in nature is well-explained by impersonal laws of physics.

Finally, James Sinclair offered thoughts on the origin of time and the universe.

To wrap everything up, the five of us participated in a post-debate Q&A session.

Enough debating for me for a while! Oh no, wait: on May 7 I’ll be in New York, debating whether there is life after death. (Spoiler alert: no.)

Chad OrzelSuperheros are Anti-Science

I’m not really a comic-book guy, but I’ve watched a bunch of comic-book movies recently. Kate was really fired up for the new Captain America movie, so I finally got around to watching the first one as background for that; then, when I was sleep-deprived last week, I watched the second Thor movie via on-demand cable; and on Sunday evening Kate and I went to see Captain America: The Winter Soldier in the theater (her second time watching it– she’s really fired up).

Mostly, this has served to confirm that I’m not a comic-book guy. I’m just not invested enough in the idea of a movie about these characters to get past the staggering logical inconsistencies in most of the movies. They’re great spectacle, but as soon as I start to think about them at all, they just fall apart.

By itself, that wouldn’t be worth a post– Tastes Vary, end of story. But I was thinking about this while walking the dog this morning, and there’s a sense in which my dissatisfaction with the genre touches on a deeper issue connected to my current obsessions: in a very deep way, superhero stories are anti-scientific.

That may sound like a crazy thing to say, given that there are numerous blog posts and books by people I know using superheros to teach science at varying levels of plausibility. And an awful lot of superheroes, including most of those appearing in the current run of Marvel movies, supposedly originate in science– Captain America was made in a lab (see image above, which I grabbed from this blog post about superhero science) as were his adversaries. But as I’ve been banging on about for months now, science isn’t just about gadgets, it’s a process, and superhero stories in general are fundamentally incompatible with the process of science.

What I mean by this is that the essential nature of the superhero story requires the hero to be singular, or at most a part of a small team. It’s about one person overcoming impossible odds to save the world via their own personal awesomeness and comic-book-science enhancements. These are, at their core, somewhere between power fantasies and a testament to the human spirit.

Science, on the other hand, is all about duplication. One of the most essential– arguably the most essential– steps of the scientific process is telling other scientists what you discovered. Whereupon they go into their own labs and duplicate what you did, and tease out further implications of it, and so on. The sharing of results is what lets the next generation of scientists stand on the shoulders of past giants.

And that’s fundamentally incompatible with superhero stories. If you could really make superheroes by scientific means, they would quickly stop being superheroes, because they’d be everywhere in short order. Because that’s what science does– it starts with an unusual event, moves to a general principle, and then expands to go everywhere. Even when the original scientists don’t expect it– when Heinrich Hertz confirmed Maxwell’s prediction of electromagnetic waves, he famously shrugged it off as a curiosity: “It’s of no use whatsoever[...] this is just an experiment that proves Maestro Maxwell was right—we just have these mysterious electromagnetic waves that we cannot see with the naked eye. But they are there” (this particular wording via Wikipedia because I’m lazy). A decade or so later, we had radio.

There are ways to get around this, but they generally involve breaking the scientific process in some way. A common one is to invoke the mistaken notion of the ahead-of-their-time genius. Captain America is singular because the only guy who knew how to make super-soldiers was shot in the first movie. Nobody else could figure it out, because Dr. Erskine was a genius.

Except, that doesn’t work even within the context of the movies. Erskine was one of three people who had some success with super-soldier research, the others being Johann Schmidt and Arnim Zola– granted, they were less successful than Erskine was, but the notion that nobody in the intervening seventy years could do better beggars belief. And there have to be people who know and understand bits of the process– note all the folks in white lab coats in the image up top. Surely some of them could build on that knowledge.

And if you try to look in history for evidence of ahead-of-their-time geniuses, the evidence is scant. Feynman is touted as a genius, except Schwinger and Tomonaga solved QED at the same time, and Stueckelberg got there earlier but failed to communicate his ideas. And Feynman’s real genius lay in making QED comprehensible to non-geniuses– that is, in communicating it to others. Dyson played a pretty major role with that, as well, showing that all three versions were equivalent, and nailing down the loose ends in Feynman’s approach.

Einstein gets hailed as a genius, but Poincaré and Lorentz were close to Special Relativity, and Hilbert almost scooped him on General Relativity by virtue of understanding the math better. And within a couple of months of the publication of General Relativity, Schwarzschild worked out a detailed solution to a real situation while in the trenches on the Eastern Front of WWI. That’s not an indicator that Einstein was miles ahead of his contemporaries; either that, or there was an unusually high genius density in 20th century physics.

There are a few myths in the history of technology that might seem to resemble comic-book super-science, but none of them really pan out. Nikola Tesla has an army of fanboys, but the great ideas he had that actually worked were not all that advanced– and usually got tied up in nasty priority disputes with other inventors who claimed to have the same idea. The ideas that were uniquely his mostly didn’t work, and some of them were just nutty. Leonardo Da Vinci is another example that gets busted out– he invented the helicopter!– but most of his “inventions” were more inspirational than technological. He had nifty ideas that sorta-kinda resemble modern inventions, if you tilt your head and squint like a confused dog, but he didn’t build working versions of any of them, because it wasn’t possible to make working examples with the technology he had available.

To find genuine examples of inventions or discoveries that were developed once and not replicated, you need to go back to before the beginnings of institutional science– things like the ancient Chinese camera obscura of Mozi cited in last week’s Cosmos, or Cornelis Drebbel maybe inventing air conditioning. But those are kind of dubious examples in a lot of ways, and nothing like you see in superhero stories. And the fact that they’re pre-modern supports the general point.

The other way to fix this is to push the story wholly out of the realm of science. So, for example, you could claim that the reason the very public existence of magic Nazi technology in 194mumble hasn’t transformed the world in a more comprehensive way is that all those disintegrator guns Hydra was using need to be powered by magic blue-glowing alien technology that operates on principles beyond human science, blah blah, Clarke’s Law. Except the principles behind that technology were evidently completely understandable by Dr. Zola in 194mumble, who built all that alien-powered stuff in the first place. Given seventy years and the many dozens of examples of working alien-powered gadgets from all the magic Nazis who get gunned down in the first movie, it’s a little hard to believe that movie-present technology isn’t way more advanced.

(You can maybe claim that all of this was suppressed by SHIELD/ Hydra/ The Illuminati, but then you get into the inherent implausibility of grand secret conspiracies, which is a whole different argument…)

The final way out is to attribute things to the singular properties of individuals, which is the route the first Captain America movie seems to be taking. That is, there’s only one Captain America because Steve Rogers’s personal qualities are so great that he alone could survive the super-soldier process without becoming a monster. Which, okay, I guess you can go there. But that pushes you into an entirely different category, the Chosen One story, which is itself fundamentally unscientific.

So, again, there’s a very deep sense in which superhero stories– even stories about technologically created superheros– are fundamentally incompatible with science. Which, now that I’ve realized this, I think is a big part of why I’m not really a comic book guy in the first place.

————

Please note: I’m not saying that the incompatibility between the superhero genre and science means that superhero stories are Bad, or that the millions of people who enjoy these stories are Bad People. There’s no requirement that everything be scientific at the core, and certainly no requirement that everyone share my tastes. I read and enjoy lots of fiction that is every bit as non-scientific as the core superhero story– I’m rather fond of epic fantasy of the Chosen One variety, for example. But I think there is this tension at the core of the stories, and that seemed like something worth poking at a bit.

Steinn SigurðssonPurty Vacant

It goes like this…


The Kingswoods

Best Punkabilly ever finally on youtube!

The Original “Pretty Vacant” for reference,
for you young ‘uns…

Clifford JohnsonBeautiful Randomness

Spotted in the hills while out walking. Three chairs left out to be taken, making for an enigmatic gathering at the end of a warm Los Angeles Spring day... I love this city. -cvj

April 14, 2014

Clifford JohnsonTotal Lunar Eclipse!

There is a total eclipse of the moon tonight! It is also at not too inconvenient a time (relatively speaking) if you're on the West Coast. The eclipse begins at 10:58pm (Pacific) and gets to totality by 12:46am. This is good timing for me since I'd been meaning to set up the telescope and look at the moon recently anyway, and a full moon can be rather bright. Now there'll be a natural filter in the way, indirectly - the earth! There's a special event up at the Griffith Observatory if you are interested in making a party out of it. It starts at 7:00pm and you can see more about the [...]

Michael NielsenHow the backpropagation algorithm works

Chapter 2 of my free online book about “Neural Networks and Deep Learning” is now available. The chapter is an in-depth explanation of the backpropagation algorithm. Backpropagation is the workhorse of learning in neural networks, and a key component in modern deep learning systems. Enjoy!
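
For readers who want a taste of the idea before diving into the chapter, here is a minimal numerical sketch (mine, not from the book, and using a single sigmoid neuron rather than a full network): backpropagation is the chain rule applied systematically, so the gradient it produces should agree with a brute-force finite-difference check.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, b, x, y):
    """Quadratic loss of a single sigmoid neuron on one training example."""
    a = sigmoid(w * x + b)
    return 0.5 * (a - y) ** 2

def grad(w, b, x, y):
    """Backpropagation (chain rule) for the same neuron: returns dL/dw and dL/db."""
    a = sigmoid(w * x + b)
    delta = (a - y) * a * (1.0 - a)   # dL/dz, with z = w*x + b
    return delta * x, delta

w, b, x, y = 0.6, -0.4, 1.5, 1.0
dw, _ = grad(w, b, x, y)
eps = 1e-6
dw_numeric = (loss(w + eps, b, x, y) - loss(w - eps, b, x, y)) / (2 * eps)
print(f"analytic dL/dw = {dw:.6f}, finite difference = {dw_numeric:.6f}")
```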

Andrew Jaffe“Public Service Review”?

A few months ago, I received a call from someone at the “Public Service Review”, supposedly a glossy magazine distributed to UK policymakers and influencers of various stripes. The gentleman on the line said that he was looking for someone to write an article for his magazine giving an example of what sort of space-related research was going on at a prominent UK institution, to appear opposite an opinion piece written by Martin Rees, president of the Royal Society.

This seemed harmless enough, although it wasn’t completely clear what I (or the Physics Department, or Imperial College) would get out of it. But I figured I could probably knock something out fairly quickly. However, he told me there was a catch: it would cost me £6000 to publish the article. And he had just ducked out of his editorial meeting in order to find someone to agree to write the article that very afternoon. Needless to say, in this economic climate, I didn’t have an account with an unused £6000 in it, especially for something of dubious benefit. (On the other hand, astrophysicists regularly publish in journals with substantial page charges.) It occurred to me that this could be a scam, although the website itself seems legitimate (though no one I spoke to knew anything about it).

I had completely forgotten about this until this week, when another colleague in our group at Imperial told me he had received the same phone call, from the same organization, with the same details: article to appear opposite Lord Rees’; short deadline; large fee.

So, this is beginning to sound fishy. Has anyone else had any similar dealings with this organization?

Update: It has come to my attention that one of the comments below was made under a false name, in particular the name of someone who actually works for the publication in question, so I have removed the name, and will likely remove the comment unless the original writer comes forward with more, and truthful, information (which I will not publish without permission). I have also been informed of the possibility that some other of the comments below may come from direct competitors of the publication. These, too, may be removed in the absence of further confirming information.

Update II: In the further interest of hearing both sides of the discussion, I would like to point out the two comments from staff at the organization giving further information as well as explicit testimonials in their favor.

Matt StrasslerA Lunar Eclipse Overnight

Overnight, those of you in the Americas and well out into the Pacific Ocean, if graced with clear skies, will be able to observe what is known as “a total eclipse of the Moon” or a “lunar eclipse”. The Moon’s color will turn orange for about 80 minutes, with mid-eclipse occurring simultaneously in all the areas in which the eclipse is visible: 3:00-4:30 am for observers in New York, 12:00- 1:30 am for observers in Los Angeles, and so forth. [As a bonus, Mars will be quite near the Moon, and about as bright as it gets; you can't miss it, since it is red and much brighter than anything else near the Moon.]

Since the Moon is so bright, you will be able to see this eclipse from even the most light-polluted cities. You can read more details of what to look for, and when to look for it in your time zone, at many websites, such as http://www.space.com/25479-total-lunar-eclipse-2014-skywatching-guide.html  However, many of them don’t really explain what’s going on.

One striking thing that’s truly very strange about the term “eclipse of the Moon” is that the Moon is not eclipsed at all. The Moon isn’t blocked by anything; it just becomes less bright than usual. It’s the Sun that is eclipsed, from the Moon’s point of view. See Figure 1. To say this another way, the terms “eclipse of the Sun” and “eclipse of the Moon”, while natural from the human-centric perspective, hide the fact that they really are not analogous. That is, the role of the Sun in a “solar eclipse” is completely different from the role of the Moon in a “lunar eclipse”, and the experience on Earth is completely different. What’s happening is this:

  • a “total eclipse of the Sun” is an “eclipse of the Sun by the Moon that leaves a shadow on the Earth.”
  • a “total eclipse of the Moon” is an “eclipse of the Sun by the Earth that leaves a shadow on the Moon.”

In a total solar eclipse, lucky humans in the right place at the right time are themselves, in the midst of broad daylight, cast into shadow by the Moon blocking the Sun. In a total lunar eclipse, however, it is the entire Moon that is cast into shadow; we, rather than being participants, are simply observers at a distance, watching in our nighttime as the Moon experiences this shadow. For us, nothing is eclipsed, or blocked; we are simply watching the effect of our own home, the Earth, eclipsing the Sun for Moon-people.

Fig. 1: In a "total solar eclipse", a small shadow is cast by the Moon upon the Earth; at that spot the Sun appears to be eclipsed.  In a "total lunar eclipse", the Earth casts a huge shadow across the entire Moon;

Fig. 1: In a “total solar eclipse”, a small shadow is cast by the Moon upon the Earth; at that spot the Sun appears to be eclipsed by the Moon. In a “total lunar eclipse”, the Earth casts a huge shadow across the entire Moon; on the near side of the Moon, the Sun appears to be eclipsed by the Earth.   The Moon glows orange because sunlight bends around the Earth through the Earth’s atmosphere; see Figure 2.  Picture is not to scale; the Sun is 100 times the size of the Earth, and much further away than shown.

Simple geometry, shown in Figure 1, assures that the first type of eclipse always happens at “new Moon”, i.e., when the Moon would not be visible in the Earth’s sky at night. Meanwhile the second type of eclipse, also because of geometry, only occurs on the night of the “full Moon”, when the entire visible side of the Moon is (except during an eclipse) in sunlight. Only then can the Earth block the Sun, from the Moon’s point of view.

A total solar eclipse — an eclipse of the Sun by the Moon, as seen from the Earth — is one of nature’s most spectacular phenomena. [I am fortunate to speak from experience; put this on your bucket list.] That is both because we ourselves pass into darkness during broad daylight, creating an amazing light show, and even more so because, due to an accident of geometry, the Moon and Sun appear to be almost the same size in the sky: the Moon, though 400 times closer to the Earth than the Sun, happens to be just about 400 times smaller in radius than the Sun. What this means is that the Sun’s opaque bright disk, which is all we normally see, is almost exactly blocked by the Moon; but this allows the dimmer (but still bright!) silvery corona of the Sun, and the pink prominences that erupt off the Sun’s apparent “surface”, to become visible, in spectacular fashion, against a twilight sky. (See Figure 2.) This geometry also implies, however, that the length of time during which any part of the Earth sees the Sun as completely blocked is very short — not more than a few minutes — and that very little of the Earth’s surface actually goes into the Moon’s shadow (see Figure 1).

No such accident of geometry affects an “eclipse of the Moon”. If you were on the Moon, you would see the Earth in the sky as several times larger than the Sun, because the Earth, though about 400 times closer to the Moon than is the Sun, is only about 100 times smaller in radius than the Sun. Thus, the Earth in the Moon’s sky looks nearly four times as large, from side to side (and 16 times as large in apparent area) as does the Moon in the Earth’s sky.  (In short: Huge!) So when the Earth eclipses the Sun, from the Moon’s point of view, the Sun is thoroughly blocked, and remains so for as much as a couple of hours.
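
Here is the same geometry in numbers, using only the round figures from the last two paragraphs (apparent angular size scales as physical size divided by distance):

```python
# Round figures from the text, in arbitrary units:
# the Moon is ~400x closer to Earth than the Sun and ~400x smaller in radius;
# the Earth is ~400x closer to the Moon than the Sun and ~100x smaller than the Sun.
sun_radius, sun_distance = 1.0, 1.0

moon_vs_sun_from_earth = (sun_radius / 400) / (sun_distance / 400)   # ~1: near-perfect overlap
earth_vs_sun_from_moon = (sun_radius / 100) / (sun_distance / 400)   # ~4: Earth easily covers the Sun

print(f"Moon vs Sun, seen from Earth: {moon_vs_sun_from_earth:.1f}x")
print(f"Earth vs Sun, seen from Moon: {earth_vs_sun_from_moon:.1f}x")
```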

But that’s not to say there’s no light show; it’s just a very different one. The Sun’s light refracts through the Earth’s atmosphere, bending around the earth, such that the Earth’s edge appears to glow bright orange or red (depending on the amount of dust and cloud above the Earth.) This ring of orange light amid the darkness of outer space must be quite something to behold! Thus the Moon, instead of being lit white by direct sunlight, is lit by the unmoonly orange glow of this refracted light. The orange light then reflects off the Moon’s surface, and some travels back to Earth — allowing us to see an orange Moon. And we can see this from any point on the Earth for which the Moon is in the sky — which, during a full Moon, is (essentially) anyplace where the Sun is down.  That’s why anyone in the Americas and eastern Pacific Ocean can see this eclipse, and why we all see it simultaneously [though, since we're in different time zones, our clocks don't show the same hour.]

Since lunar eclipses (i.e. watching the Moon move into the Earth’s shadow) can be seen simultaneously across any part of the Earth where it is dark during the eclipse, they are common. I have seen two lunar eclipses at dawn, one at sunset, and several in the dark of night; I’ve seen the moon orange, copper-colored, and, once, blood red. If you miss one total lunar eclipse due to clouds, don’t worry; there will be more. But a total solar eclipse (i.e. standing in the shadow of the Moon) can only be seen and appreciated if you’re actually in the Moon’s shadow, which affects, in each eclipse, only a tiny fraction of the Earth — and often a rather inaccessible fraction. If you want to see one, you’ll almost certainly have to plan, and travel. My advice: do it.  Meanwhile, good luck with the weather tonight!


Filed under: Astronomy Tagged: astronomy

n-Category Café universo.math

A new Spanish language mathematical magazine has been launched: universo.math. Hispanophones should check out the first issue! There are some very interesting looking articles which cover areas from art through politics to research-level mathematics.

The editor-in-chief is my mathematical brother Jacob Mostovoy and he wants it to be a mix of Mathematical Intelligencer, Notices of the AMS and the New Yorker, together with less orthodox ingredients; the aim is to keep the quality high.

Besides Jacob, the contributors to the first issue that I recognise include Alberto Verjovsky, Ernesto Lupercio and Edward Witten, so universo.math seems to be off to a high quality start.

John PreskillTsar Nikita and His Scientists

Once upon a time, a Russian tsar named Nikita had forty daughters:

                Every one from top to toe
                Was a captivating creature,
                Perfect—but for one lost feature.

 
So wrote Alexander Pushkin, the 19th-century Shakespeare who revolutionized Russian literature. In a rhyme, Pushkin imagined forty princesses born without “that bit” “[b]etween their legs.” A courier scours the countryside for a witch who can help. By summoning the devil in the woods, she conjures what the princesses lack into a casket. The tsar parcels out the casket’s contents, and everyone rejoices.

“[N]onsense,” Pushkin calls the tale in its penultimate line. A “joke.”

The joke has, nearly two centuries later, become reality. Researchers have grown vaginas in a lab and implanted them into teenage girls. Thanks to a genetic defect, the girls suffered from Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome: Their vaginas and uteruses had failed to grow to maturity or at all. A team at Wake Forest and in Mexico City took samples of the girls’ cells, grew more cells, and combined their harvest with vagina-shaped scaffolds. Early in the 2000s, surgeons implanted the artificial organs into the girls. The patients, the researchers reported in the journal The Lancet last week, function normally.

I don’t usually write about reproductive machinery. But the implants’ resonance with “Tsar Nikita” floored me. Scientists have implanted much of Pushkin’s plot into labs. The sexually deficient girls, the craftsperson, the replacement organs—all appear in “Tsar Nikita” as in The Lancet. In poetry as in science fiction, we read the future.

Though threads of Pushkin’s plot survive, society’s view of the specialist has progressed. “Deep [in] the dark woods” lives Pushkin’s witch. Upon summoning the devil, she locks her cure in a casket. Today’s vagina-implanters star in headlines. The Wall Street Journal highlighted the implants in its front section. Unless the patients’ health degrades, the researchers will likely list last week’s paper high on their CVs and websites.

Much as Dr. Atlántida Raya-Rivera, the paper’s lead author, differs from Pushkin’s witch, the visage of Pushkin’s magic wears the nose and eyebrows of science. When tsars or millennials need medical help, they seek knowledge-keepers: specialists, a fringe of society. Before summoning the devil, the witch “[l]ocked her door . . . Three days passed.” I hide away to calculate and study (though days alone might render me more like the protagonist in another Russian story, Chekhov’s “The Bet”). Just as the witch “stocked up coal,” some students stockpile Red Bull before hitting the library. Some habits, like the archetype of the wise woman, refuse to die.

From a Russian rhyme, the bones of “Tsar Nikita” have evolved into cutting-edge science. Pushkin and the implants highlight how attitudes toward knowledge have changed, offering a lens onto science in culture and onto science culture. No wonder readers call Pushkin “timeless.”

But what would he have rhymed with “Mayer-Rokitansky-Küster-Hauser”?

“Tsar Nikita” has many nuances—messages about censorship, for example—that I didn’t discuss. To the intrigued, I recommend The Queen of Spades: And selected works, translated by Anthony Briggs and published by Pushkin Press.

Quantum DiariesMoriond 2014: new results, new explorations… but no new physics

Even before my departure for La Thuile (Italy), results from the Rencontres de Moriond were already filling the news feeds. This year's electroweak session, from 15 to 22 March, opened with the first "world measurement" of the top quark mass, based on the combination of the measurements published so far by the Tevatron and LHC experiments. The week continued with a spectacular result from CMS on the width of the Higgs.

Even though it is approaching its 50th anniversary, the Moriond conference has stayed at the cutting edge. Despite the growing number of must-attend conferences in high-energy physics, Moriond keeps a special place in the community, partly for historical reasons: the conference has existed since 1966 and has established itself as the place where theorists and experimentalists come to see and be seen. Let's look now at what the LHC experiments had in store for us this year…

New results

This year, the highlight of the show at Moriond was of course the announcement of the best limit to date on the width of the Higgs, at < 17 MeV with 95% confidence, presented at both Moriond sessions by the CMS experiment. The new measurement, obtained with a new analysis method based on Higgs decays into two Z particles, is about 200 times more precise than previous ones. The discussions of this limit focused mainly on the new method used in the analysis. What assumptions were needed? Could the same technique be applied to a Higgs decaying into two W bosons? How would this new width constrain theoretical models of new physics? We will no doubt find out at Moriond next year…

The announcement of the first joint world result for the top quark mass also generated great enthusiasm. This result, which combines Tevatron and LHC data, constitutes the best value so far, worldwide, at 173.34 ± 0.76 GeV/c². Before the excitement had died down, at the Moriond QCD session CMS announced a new preliminary result based on the full dataset collected at 7 and 8 TeV. This result alone has a precision that rivals that of the world average, clearly demonstrating that we have not yet reached the ultimate precision on the top quark mass.

This plot shows the four measurements of the top quark mass published by the ATLAS, CDF, CMS and D0 collaborations, together with the most precise measurement to date obtained from the joint analysis.

Other top quark news, including new precise LHC measurements of its spin and polarisation, as well as new ATLAS results on the single top quark cross-section in the t-channel, was presented by Kate Shaw on Tuesday 25 March. Run II of the LHC will allow us to deepen our understanding of the subject even further.

A fundamental and delicate measurement that probes the nature of electroweak symmetry breaking, as realised by the Brout-Englert-Higgs mechanism, is the scattering of two massive vector bosons. This process is rare, but in the absence of the Higgs boson its rate would grow strongly with the collision energy, to the point of violating the laws of physics. A hint of the scattering of electroweak vector bosons was detected for the first time by ATLAS, in events with two leptons of the same charge and two jets with a large rapidity separation.

Relying on larger datasets and improved analyses, the LHC experiments are tackling rare and difficult multi-particle final states involving the Higgs boson. ATLAS presented an excellent example of this, with a new result in the search for Higgs production in association with two top quarks, with the Higgs decaying into a pair of b quarks. With an expected limit of 2.6 times the Standard Model prediction for this channel alone, and an observed relative signal strength of 1.7 ± 1.4, there are high hopes for the future high-energy running of the LHC, in which the rate of this process will increase.

Meanwhile, in the world of heavy flavours, the LHCb experiment presented further analyses of the exotic state X(3872). The experiment unambiguously confirmed that its quantum numbers JPC are 1++ and provided evidence for its decay into ψ(2S)γ.

The study of the quark-gluon plasma continues in the ALICE experiment, and discussions focused mostly on results from the LHC's proton-lead (p-Pb) run. In particular, the newly observed "double ridge" in p-Pb collisions is being studied in detail, and analyses of its jet peak, mass distribution and charge dependence were presented.

New explorations

Thanks to our new understanding of the Higgs boson, the LHC has entered the era of precision Higgs physics. Our knowledge of the Higgs properties – for example, measurements of its spin and width – has improved, and precise measurements of Higgs interactions and decays have also made good progress. Results on searches for physics beyond the Standard Model were presented as well, and the LHC experiments continue to invest heavily in searches for supersymmetry.

In the Higgs sector, many researchers hope to find the supersymmetric cousins of the Higgs and the electroweak bosons, called neutralinos and charginos, through electroweak processes. ATLAS presented two new papers summarising multiple searches for these particles. The absence of a significant signal was used to set exclusion limits on charginos and neutralinos of 700 GeV – if they decay through intermediate supersymmetric lepton partners – and 420 GeV – when they decay only through Standard Model bosons.

In addition, for the first time, ATLAS has carried out a search for the electroweak mode that is hardest to observe, producing a pair of charginos that decay into W bosons. This mode resembles Standard Model W-pair production, whose currently measured rate appears slightly higher than expected.

In this context, CMS presented new results in the search for electroweak pair production of higgsinos through their decay into a Higgs (at 125 GeV) and a nearly massless gravitino. The final state shows a characteristic signature of four b-quark jets, compatible with double Higgs decay kinematics. A slight excess in the number of candidate events means that the experiment cannot exclude a higgsino signal. Upper limits on the signal strength of about twice the theoretical prediction are set for higgsino masses between 350 and 450 GeV.

In several supersymmetry scenarios, charginos can be metastable and could potentially be detected as long-lived particles. CMS presented an innovative search for generic long-lived charged particles, carried out by mapping the detection efficiency as a function of the particle's kinematics and its energy loss in the tracker. This study not only sets strict limits on various supersymmetric models that predict a chargino lifetime (c*tau) greater than 50 cm, but also provides the theory community with a powerful tool for independently testing new models that predict long-lived charged particles.

In order to be as general as possible in the search for supersymmetry, CMS also presented results of new searches in which a large subset of the supersymmetry parameters, such as the gluino and squark masses, are tested for their statistical compatibility with different experimental measurements. This made it possible to draw a probability map in a 19-dimensional space. Notably, this map shows that models predicting masses below 1.2 TeV for the gluino and below 700 GeV for the sbottom and the stop are strongly disfavoured.

but no new physics

Despite all these meticulous searches, the phrases heard most often at Moriond were "no excess observed" and "consistent with the Standard Model". All hopes now rest on the next LHC run, at 13 TeV. If you would like to know more about the prospects opened up by the LHC's second run, see the CERN Bulletin article "La vie est belle à 13 TeV" ("Life is beautiful at 13 TeV").

In addition to the various LHC results that were presented, news was also reported at Moriond from the Tevatron experiments, BICEP, RHIC and other experiments. To find out more, see the conference websites, Moriond EW and Moriond QCD.

Chad OrzelCosmos Reboot Gets Small

A diabolical psychologist brings a mathematician in for an experiment. The mathematician is seated in a chair on a track leading to a bed on which there is an extremely attractive person of the appropriate gender, completely naked. The psychologist explains “This person will do absolutely anything you want, subject to one condition: every five minutes, we will move your chair across one-half of the distance separating you.”

The mathematician explodes in outrage. “What! It’ll take an infinite time to get there. This is torture!” They storm out.

The next experimental subject is a physicist, who sits in the chair and gets the same explanation. “Awesome!” says the physicist. “Let’s get started!”

The psychologist is taken aback. “You do realize that you’ll never get all the way there, right?”

“Oh, sure,” says the physicist, “but I’ll get close enough for all practical purposes.”

This week’s episode of Cosmos went from the whopping huge to the very small, working their way down from tardigrades to nuclei. This was front-loaded with a bunch of biochem content, and only got around to physics and astrophysics much later.

As always, the visuals were spectacular, particularly the animated microscopic organisms. I was a little puzzled, though, by the decision to go from a quasi-photographic rendition of plant cells and organelles to a visualized metaphorical machine representing the action of photosynthesis. I mean, it was nice animation, and all, but kind of jarring after all the very literal stuff that came before it.

There was less historical content in this one, which is probably to the good. There was a slightly overdone bit about Darwin predicting the existence of long-tongued insects to pollinate particular species of orchid (which is mentioned in the Origin but not given especially heavy emphasis). And there was the obligatory call-back to the ancient Greek atomists, where they deserve credit for not just starting with Democritus, but going back to Thales a century or so earlier. The Greek cartoons got in the obligatory anti-religion message, and suffered from the usual problem that the ideas of the atomist Greeks were not actually all that similar to the modern concept of atoms. It wouldn’t be the Cosmos reboot without some annoyingly ahistorical content.

The “Keanu whoa” moment for the week was the claim that we never really touch anything, since the electromagnetic repulsion between molecules making up solid objects means there’s always a microscopic space between even objects that appear to be in contact. Which is true enough, but mostly just reminds me of the joke at the top of this post.

I was mostly okay with the discussion of atoms and nuclei; it could’ve used more quantum-mechanical content, but I suspect they think they got that out of the way last week. I will note that it’s not really hot enough in the core of the Sun (according to our best models) for protons to be moving fast enough to directly come in contact and fuse– you can calculate the necessary temperature for the distance of closest approach of two positive charges to be on the scale of a nucleus, and it’s around 15,000,000,000 K. The actual temperature of the Sun’s core is more like 10,000,000 K. But quantum mechanics allows a tiny probability for one proton to tunnel through to the other, and allows fusion to proceed. That, to my mind, is more awesome than the “We never really touch anything” stuff, but then, I’m a physicist.
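
Here is a back-of-the-envelope version of that calculation, for the curious. The closest-approach distance of roughly a femtometre is my illustrative choice, and the prefactor depends on how you define the typical thermal energy, so only the order of magnitude is meaningful.

```python
# Classical estimate: set the typical thermal energy equal to the Coulomb barrier
# for two protons brought within roughly a nuclear radius of each other.
e = 1.602e-19        # proton charge, coulombs
k_e = 8.988e9        # Coulomb constant, N m^2 / C^2
k_B = 1.381e-23      # Boltzmann constant, J / K
r = 1.0e-15          # illustrative closest approach, about 1 femtometre

barrier = k_e * e ** 2 / r           # Coulomb energy at separation r, in joules
temperature = barrier / (1.5 * k_B)  # setting (3/2) k_B T equal to the barrier

print(f"Coulomb barrier ~ {barrier / e / 1e6:.1f} MeV")
print(f"required temperature ~ {temperature:.1e} K")  # ~1e10 K, versus ~1e7 K in the solar core
```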

The last bit was about neutrinos, both as an element of observational astrophysics– shoutout to an animated supernova 1987a, but as usual none of the actual photos of the supernova outshining everything else– and as a possible probe of the early universe through the relic neutrinos created in the Big Bang. This was illustrated with a Wolfgang Pauli hologram, but next to no detail about Pauli himself, which I found a little disappointing because he’s an entertaining figure. Fun trivia: his famous prediction of the existence of the neutrino as a desperate remedy for the problem of beta decay (the explanation of which was a little garbled, but whatever) was via a letter sent to a conference that he was skipping because he wanted to attend a ball in Zurich. So much for the image of the antisocial physicist…

Anyway, a mostly good episode. The gaps in the science were either biological things that I didn’t notice, or subtle-ish points of physics that nobody else will notice. I could’ve done with less “matter is empty space!” and more quantum physics (or a discussion of Rutherford, because I never get tired of Rutherford), but I’m probably in a small minority there.

Quantum DiariesOn the Shoulders of…

My first physics class wasn’t really a class at all. One of my 8th grade teachers noticed me carrying a copy of Kip Thorne’s Black Holes and Time Warps, and invited me to join a free-form book discussion group on physics and math that he was holding with a few older students. His name was Art — and we called him by his first name because I was attending, for want of a concise term that’s more precise, a “hippie” school. It had written evaluations instead of grades and as few tests as possible; it spent class time on student governance; and teachers could spend time on things like, well, discussing books with a few students without worrying about whether it was in the curriculum or on the tests. Art, who sadly passed some years ago, was perhaps best known for organizing the student cafe and its end-of-year trip, but he gave me a really great opportunity. I don’t remember learning anything too specific about physics from the book, or from the discussion group, but I remember being inspired by how wonderful and crazy the universe is.

My second physics class was combined physics and math, with Dan and Lewis. The idea was to put both subjects in context, and we spent a lot of time on working through how to approach problems that we didn’t know an equation for. The price of this was less time to learn the full breadth of the subjects; I didn’t really learn any electromagnetism in high school, for example.

When I switched to a new high school in 11th grade, the pace changed. There were a lot more things to learn, and a lot more tests. I memorized elements and compounds and reactions for chemistry. I learned calculus and studied a bit more physics on the side. In college, where the physics classes were broad and in depth at the same time, I needed to learn things fast and solve tricky problems too. By now, of course, I’ve learned all the physics I need to know — which is largely knowing who to ask or which books to look in for the things I need but don’t remember.

There are a lot of ways to run schools and to run classes. I really value knowledge, and I think it’s crucial in certain parts of your education to really buckle down and learn the facts and details. I’ve also seen the tremendous worth of taking the time to think about how you solve problems and why they’re interesting to solve in the first place. I’m not a high school teacher, so I don’t think I can tell the professionals how to balance all of those goods, which do sometimes conflict. What I’m sure of, though, is that enthusiasm, attention, and hard work from teachers is a key to success no matter what is being taught. The success of every physicist you will ever see on Quantum Diaries is built on the shoulders of the many people who took the time to teach and inspire them when they were young.

Quantum DiariesLHC Scientists face major setback

1st April 2014. The LHC is currently in shutdown in preparation for the next physics run in 2015. However, the record-breaking accelerator is in danger of falling far behind schedule as the engineers struggle with technical difficulties 100m below ground level.

The LHC tunnels house the 27km long particle accelerator in carefully controlled conditions. When the beams circulate they must be kept colder than anywhere else in the solar system, and held in a vacuum emptier than the voids of outer space. Any disruption to the cryogenic cooling systems or the vacuum systems can place serious strain on the operations timetable, and engineers have found signs of severe damage.

Scientists patrol the LHC, inspecting the damaged areas.

The first indications of problems were identified coming from Sector 7 between areas F and H. Cryogenics expert Francis Urquhart said: “My team noticed dents in the service pipes about 50cm from the floor. There was also a deposit of white fibrous foreign matter on some of the cable trays.” The pipes were replaced, but the damage returned the following day, and small black aromatic samples were found piled on the floor. These were sent for analysis, and after chemical tests confirmed that they contained no liquid helium and radiometry found that they posed no ionisation risk, they were finally identified as Ovis aries depositions.

Ovis aries are found throughout the CERN site, so on-site contamination could not be ruled out. It is currently thought that the specimens entered the Super Proton Synchrotron (SPS) accelerator and proceeded from the SPS to the LHC, leaving deposits as they went. The expert in charge, Gabriella Oak, could not be reached for comment, but is said to be left feeling “rather sheepish”.

Elsewhere on the ring there was another breach of the security protocols, as several specimens of Bovinae were found in the tunnels. The Bovinae are common in Switzerland and, due to their size, must have entered via one of the service elevators. All access points and elevators at the LHC are carefully controlled using biometry and retinal scans, making unauthorised entry virtually impossible. Upon being asked whether the Bovinae had been seen scanning their retinae at the security checkpoints, Francis Urquhart replied “You might very well think that. I could not possibly comment.” While evidence of such actions cannot be found in the CCTV footage, there have been signs of chewed cud found on the floor, as well as Bovinae deposits, which are significantly larger than the Ovis deposits, owing to the difference in size.

The retinal scans at the LHC are designed exclusively for human use. A search of the biometric record database shows at least one individual (R Wiggum) with unusual retinae, affiliated with “Bovine University”.

It is not known exactly how much fauna is currently in the LHC tunnels, although there are thought to be at least 25 different specimens. They can be identified by the bells they carry around their necks, which can sound like klaxons when they charge. Until the fauna have been cleared, essential repair work is extremely difficult. “I was repairing some damage caused by a passing cow,” said Stanford PhD student Cecilia, “when I thought I heard the low oxygen klaxon. By the time I realised it was just two sheep I had already put on my safety mask and pulled the alarm to evacuate the tunnels.” She then commented: “It took us three hours to get access to the tunnels again, and the noises and lights had caused the animals to panic, creating even more damage to clean up.”

This is not the first time a complex of tunnels has been overrun by farm animals. In the early 90s the London Underground was found to be infested with horses, which turned into a long-term problem and took many years to resolve.

Current estimates on the delay to the schedule range from a few weeks to almost a decade. Head of ATLAS operations, Dr Remy Beauregard Hadley, comments “I can’t believe all this has happened. They talk about Bovinae deposits delaying the turn on, and I think it’s just a load of bullshit!”

Tommaso DorigoAldo Menzione And The Design Of The Silicon Vertex Detector

Below is a clip from a chapter of my book where I describe the story of the silicon microvertex detector of the CDF experiment. CDF collected proton-antiproton collisions from the Tevatron collider in 1985, 1987-88, 1992-96, and 2001-2011. Run 1A occurred in 1992, and it featured, for the first time at a hadron collider, a silicon strip detector, the SVX. The SVX would prove crucial for the discovery of the top quark.

read more

Andrew JaffeAcademic Blogging Still Dangerous?

Nearly a decade ago, blogging was young, and its place in the academic world wasn’t clear. Back in 2005, I wrote about an anonymous article in the Chronicle of Higher Education, a so-called “advice” column admonishing academic job seekers to avoid blogging, mostly because it let the hiring committee find out things that had nothing whatever to do with their academic job, and reject them on those (inappropriate) grounds.

I thought things had changed. Many academics have blogs, and indeed many institutions encourage it (here at Imperial, there’s a College-wide list of blogs written by people at all levels, and I’ve helped teach a course on blogging for young academics). More generally, outreach has become an important component of academic life (that is, it’s at least necessary to pay it lip service when applying for funding or promotions) and blogging is usually seen as a useful way to reach a wide audience outside of one’s field.

So I was distressed to see the lament — from an academic blogger — “Want an academic job? Hold your tongue”. Things haven’t changed as much as I thought:

… [A senior academic said that] the blog, while it was to be commended for its forthright tone, was so informal and laced with profanity that the professor could not help but hold the blog against the potential faculty member…. It was the consensus that aspiring young scientists should steer clear of such activities.

Depending on the content of the blog in question, this seems somewhere between a disregard for academic freedom and a judgment of the candidate on completely irrelevant grounds. Of course, it is natural to want the personalities of our colleagues to mesh well with our own, and almost impossible to completely ignore supposedly extraneous information. But we are hiring for academic jobs, and what should matter are research and teaching ability.

Of course, I’ve been lucky: I already had a permanent job when I started blogging, and I work in the UK system which doesn’t have a tenure review process. And I admit this blog has steered clear of truly controversial topics (depending on what you think of Bayesian probability, at least).

April 13, 2014

Clifford JohnsonParticipatory Art!

As you know I am a big fan of sketching, and tire easily of the remark people make that they "can't draw" - an almost meaningless thing society trains most people to think and say, with the result that they miss out on a most wonderful part of life. Sketching is a wonderful way of slowing down and really looking at the world around you and recording your impressions. If you can write you certainly can draw. It is a learned skill, and the most important part of it is learning to look*. But anyway, I was pleased to see this nice way of getting people of all ages involved in sketching for fun at the book festival! So I reached in and grabbed a photo for you. [...] Click to continue reading this post

Clifford JohnsonYoung Author Meeting!

It is nice to see the variety of authors at a book fair event like this one, and it's great to see people's enthusiasm about meeting people who've written works they've spent a lot of time with. The long lines for signings are remarkable! As you might guess, I'm very much a supporter of the unsung authors doing good work in their own small way, not anywhere near the spotlight. An interesting booth caught my notice as I was wandering... The word "science" caught my eye. Seems that a mother and daughter team wrote a science book to engage children to become involved in science... Hurrah! So Jalen Langie (the daughter, amusingly wearing a lab coat) gets to be [...] Click to continue reading this post

David Hoggred giants as clocks

Lars Bildsten (KITP) was in town and gave two talks today. In the first, he talked about super-luminous supernovae, and how they might be powered by the spin-down of the degenerate remnant, when spin-down times and diffusion times become comparable. In the second, he talked about making precise inferences about giant stars from Kepler and COROT photometry. The photometry shows normal modes and mode splittings, which are sensitive to the run of density in the giants; this in turn constrains what fraction of the star has burned to helium. There is a lot of interesting unexplained phenomenology related to the spin of the stellar core, which remains a puzzle. There was much more in the talk as well, but one thing that caught my interest is that some of the modes are exceedingly high in quality factor or coherence. That is, giants look like very good clocks. A discussion broke out at the end about whether or not we could use these clocks to constrain, detect, or measure gravitational radiation. Each star is much worse than a radio pulsar, but there are far, far more of them available for use. Airplane project!

David Hoggprobabilistic halo mass inference

In a low-research day, at lunch, Kilian Walsh pitched to Fadely and me a project to infer galaxy host halo masses from galaxy positions and redshifts. We discussed some of the issues and previous work. I am out of the loop, so I don't know the current literature. But I am sure there is interesting work that can be done, and it would be fun to combine galaxy kinematic information with weak lensing, strong lensing, x-ray, and SZ effect data.

Doug NatelsonEnd of an era.

As long as we're talking about the (alleged) end of science, look at this picture (courtesy of Don Monroe).  This is demolition work being done in Murray Hill, NJ, as Alcatel-Lucent takes down a big hunk of Building 1 of Bell Labs. 


This building and others at the site were the setting for some of the most important industrial research of the 20th century.  (Before people ask, the particular lab where the transistor was first made is not being torn down here.)  I've written before about the near-demise of long-term basic research in the industrial setting in the US.  While Bell Labs still exists, this, like the demise of the Holmdel site, is a painful mark of the end of an era.

Chad OrzelUnion College Hockey, NCAA Champions

One of the weird quirks of Union College, where I teach, is that the hockey teams compete in the NCAA’s Division I, something that doesn’t usually happen for a school with only 2200 students. That might seem like a ridiculously terrible idea, but last night, it worked surprisingly well: Union beat perennial hockey power Minnesota for the NCAA National Championship. It was an amazing game.

I’m not going to pretend like I’m a huge fan– Kate and I watched on tv, the only hockey I’ve watched all season– and I’m certainly not going to use first-person pronouns to talk about it (I really hate that…). I had nothing to do with this– I’m fairly sure I’ve never even had any current hockey players in class. This is all theirs, the product of lots of hard work.

But this is a huge deal for all the students. We even got a few glimpses of some physics majors on ESPN– two of them, including one of my research students, play in the pep band. Pretty much all of them will remember this forever as one of the biggest things to happen in their time in college. And that’s worth celebrating and acknowledging.

So congratulations to the hockey team, some of whom celebrated in the most 2014 way imaginable, as seen in the “featured image” up top. The guys in hockey gear are Matt Bodie, Shayne Gostisbehere, and Daniel Carr, three of Union’s standout players– Gostisbehere in particular played an amazing game, and was trending on Twitter at one point, which is a miracle given his surname. The dude in the suit taking the selfie is ESPN’s John Buccigross (the resulting photo is here). This isn’t a you-kids-get-your-selfies-off-my-lawn thing, by the way– I think it’s pretty funny, and kind of awesome.

2014, ladies and gentlemen, and the Union College Dutchmen, NCAA Division I National Champions.

Clifford JohnsonWhat Books Inspire…

I looked for D-Branes, by Clifford V. Johnson... But somehow must have missed it... -cvj Click to continue reading this post

Geraint F. LewisGravitational lensing in WDM cosmologies: The cross section for giant arcs

We've had a pretty cool paper accepted for publication in the Monthly Notices of the Royal Astronomical Society  which tackles a big question in astronomy, namely what is the temperature of dark matter. Huh, you might say "temperature", what do you mean by "temperature"? I will explain.

The paper is by Hareth Mahdi, a PhD student at the Sydney Institute for Astronomy. Hareth's expertise is in gravitational lensing, using the huge amounts of mass in galaxy clusters to magnify the view of the distant Universe. Gravitational lenses are amongst the most beautiful things in all of astronomy. For example:
Working out how strong the lensing effect is reveals the amount of mass in the cluster, showing that there is a lot of dark matter present.

Hareth's focus is not "real" clusters, but clusters in "synthetic" universes, universes we generate inside supercomputers. The synthetic universes look as nice as the real ones; here's one someone made earlier (than you, Blue Peter).

 Of course, in a synthetic universe, we control everything, such as the laws of physics and the nature of dark matter.

Dark matter is typically treated as being cold, meaning that the particles that make up dark matter move at speeds much lower than the speed of light. But we can also consider hot dark matter, which travels at speeds close to the speed of light, or warm dark matter, which moves at speeds somewhere in between.

What's the effect of changing the temperature of dark matter? Here's an illustration, with cold at the top, warmer in the middle, and hottest at the bottom. What you can see is that as we wind up the temperature, the small-scale structure in the cluster gets washed out. Some think that warm dark matter might be the solution to the missing satellite problem.

Hareth had two samples of clusters, some from cold dark matter universes and some from warm, and he calculated the strength of gravitational lensing in both. The goal is to see if changing to warm dark matter can help fix another problem in astronomy, namely that the clusters we observe seem to be more efficient at producing lensed images than the ones we have in our simulated universes.

We can get some pictures of the lensing strengths of these clusters, which look like this:
This shows the mass distributions in cold dark matter universes, with a corresponding cluster in the warm dark matter universe. Because the simulations were set up with similar initial conditions, these are the same clusters seen in the two universes.

You can already see that there are some differences, but what about lensing efficiency? There are a few ways to characterise this, but one way is the cross-section for lensing. When we compare the two cosmologies, we get the following:

There is a rough one-to-one relationship, but notice that the warm dark matter clusters sit mainly above the black line. This means that the warm dark matter clusters are more efficient at lensing than their cold dark matter colleagues.

This is actually an unexpected result. Naively, we would expect warm dark matter to remove structure and make clusters puffy, and hence less efficient at lensing. So what is happening?

It took a bit of detective work, but we tracked it down. Yes, in warm dark matter clusters, the small-scale structure is wiped out, but where does the mass go? It actually goes into the larger host halo, making it more efficient at lensing. Slightly bizarre, but it does mean that, if we can measure enough real clusters, we have a test of the temperature of dark matter!

But alas, even though the efficiency is stronger with warm dark matter, it is not strong enough to fix the lensing efficiency problem. As ever, there is more work to do, and I'll report it here.

Until then, well done Hareth!

Gravitational lensing in WDM cosmologies: The cross section for giant arcs

The nature of the dark sector of the Universe remains one of the outstanding problems in modern cosmology, with the search for new observational probes guiding the development of the next generation of observational facilities. Clues come from tension between the predictions from ΛCDM and observations of gravitationally lensed galaxies. Previous studies showed that galaxy clusters in ΛCDM are not strong enough to reproduce the observed number of lensed arcs. This work aims to constrain the warm dark matter cosmologies by means of the lensing efficiency of galaxy clusters drawn from these alternative models. The lensing characteristics of two samples of simulated clusters in the warm dark matter (ΛWDM) and cold dark matter (ΛCDM) cosmologies have been studied. The results show that even though the CDM clusters are more centrally concentrated and contain more substructures, the WDM clusters have slightly higher lensing efficiency than their CDM counterparts. The key difference is that WDM clusters have more extended and more massive subhaloes than CDM analogues. These massive substructures significantly stretch the critical lines and caustics and hence they boost the lensing efficiency of the host halo. Despite the increase in the lensing efficiency due to the contribution of massive substructures in the WDM clusters, this is not enough to resolve the arc statistics problem.

April 12, 2014

Jordan EllenbergJordan and the Dream of Rogen

The other night I dreamed I was going into a coffeeshop and Seth Rogen was sitting at an outside table eating a salad.  He was wearing a jeans jacket and his skin was sort of bad.  I have always admired Rogen’s work so I screwed up my courage, went up to his table and said

“Are you…”

And he said, “Yes, I am… having the chef’s salad.  You should try it, it’s great.”

And I sort of stood there and goggled and then he was like, “Yeah, no, yes, I’m Seth Rogen.”

I feel proud of my unconscious mind for producing what I actually consider a reasonably Seth Rogen-style gag!


Steinn SigurðssonHubble Plateaus

In times past we have lovingly tracked the proposal frenzy as the near annual Hubble Space Telescope proposal deadline approaches.

As was noted by Julianne several years ago, and confirmed over the last half dozen cycles, the shape of the curve of the number of submitted proposals as a function of time until the deadline is nearly invariant.
Interestingly, the total number of proposals also does not change much: there are dips and spikes with the loss and availability of instruments, but the total is nearly stationary, and is some measure of the statistical saturation of astronomers' ability to put together coherent proposals in a finite time.

Anyway, here is this year’s lot, culled from a small sample of fb posts:

[figure: hst14, this cycle's proposal-submission curve]

Same-same…

@astronomolly kept it real on twitter in the run-up

April 11, 2014

Scott AaronsonIs There Anything Beyond Quantum Computing?

So I’ve written an article about the above question for PBS’s website—a sort of tl;dr version of my 2005 survey paper NP-Complete Problems and Physical Reality, but updated with new material about the simulation of quantum field theories and about AdS/CFT.  Go over there, read the article (it’s free), then come back here to talk about it if you like.  Thanks so much to Kate Becker for commissioning the article.

In other news, there’s a profile of me at MIT News (called “The Complexonaut”) that some people might find amusing.

Oh, and anyone who thinks the main reason to care about quantum computing is that, if our civilization ever manages to surmount the profound scientific and technological obstacles to building a scalable quantum computer, then that little padlock icon on your web browser would no longer represent ironclad security?  Ha ha.  Yeah, it turns out that, besides factoring integers, you can also break OpenSSL by (for example) exploiting a memory bug in C.  The main reason to care about quantum computing is, and has always been, science.

Doug NatelsonJohn Horgan: Same old, same old.

John Horgan writes about science for National Geographic.  You may remember him from his book, The End of Science.   His thesis, 17 years ago, was that science is basically done - there just aren't going to be too many more profound discoveries, particularly in physics, because we've figured it all out and the rest is just details.  Well, I'll give him this for consistency:  He's still flogging this dead horse 17 years later, as seen in his recent column.  I disagree with his point of view.  Even if you limit yourself to physics, there are plenty of discoveries left to be made for a long time to come - things only look bleak if (a) you're only a reductionist; and (b) you limit your interest in physics to a narrow range of topics.  In other words, looking for supersymmetric partners at the LHC might possibly not be a great bet, but that doesn't mean that all of science is over.

April 10, 2014

David Hoggpermitted kernel functions, cosmology therewith

I spent a while at the group meeting of applied mathematician Leslie Greengard (NYU, Simons Foundation), telling the group how cosmology is done, and then how it might be done if we had some awesome math foo. In part we got on to how you could make a non-parametric kernel function for a Gaussian Process for the matter density field at late times, given that you need to stay non-negative definite. Oh wait, I mean positive semi-definite. Oh the things you learn! Anyway, it turns out that this is not really a solved problem and possibly a project was born. Hope so! I would love to recreate our discovery of the baryon acoustic feature with proper inference. At the group meeting, Foreman-Mackey and I had an "aha moment" about Ambikasaran et al's method for solving and taking the determinants of kernel matrices (Siva Ambikasaran (NYU) was in attendance), and then spent the post-group-meeting lunch in part quizzing Mike O'Neil (NYU) about how to structure our code to work fast in the three-dimensional case (the cosmology case).
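The constraint mentioned above (a Gaussian Process kernel must produce positive semi-definite matrices) is easy to probe numerically. Here is a generic numpy sketch, not code from Hogg's project, showing how one might test a candidate kernel on a grid of positions:

```python
import numpy as np

def is_positive_semidefinite(K, tol=1e-10):
    """Check a symmetric kernel matrix for positive semi-definiteness."""
    return np.all(np.linalg.eigvalsh(K) >= -tol)

# Evaluate two candidate stationary "kernels" on a 1-d grid of positions.
x = np.linspace(0.0, 1.0, 50)
dx = x[:, None] - x[None, :]

K_good = np.exp(-0.5 * (dx / 0.1) ** 2)   # squared-exponential: a valid kernel
K_bad = 1.0 - dx ** 2                     # looks harmless, but is not PSD here

print(is_positive_semidefinite(K_good))   # True
print(is_positive_semidefinite(K_bad))    # False: not an admissible kernel
```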

n-Category Café The Modular Flow on the Space of Lattices

Guest post by Bruce Bartlett

The following is the greatest math talk I’ve ever watched!

  • Etienne Ghys (with pictures and videos by Jos Leys), Knots and Dynamics, ICM Madrid 2006. [See below the fold for some links.]

Etienne Ghys A modular knot

I wasn’t actually at the ICM; I watched the online version a few years ago, and the story has haunted me ever since. Simon and I have been playing around with some of this stuff, so let me share some of my enthusiasm for it!

The story I want to tell here is how, via modular flow of lattices in the plane, certain matrices in $SL(2,\mathbb{Z})$ give rise to knots in the 3-sphere less a trefoil knot. Despite possibly sounding quite scary, this can be easily explained in an elementary yet elegant fashion.

As promised above, here are some links related to Ghys’ ICM talk.

I’m going to focus on the last third of the talk — the modular flow on the space of lattices. That’s what produced the beautiful picture above (credit for this and other similar pics below goes to Jos Leys; the animation is Simon’s.)

Lattices in the plane

For us, a lattice is a discrete subgroup of $\mathbb{C}$. There are three types: the zero lattice, the degenerate lattices, and the nondegenerate lattices:

Lattices

Given a lattice $L$ and an integer $n \geq 4$ we can calculate a number — the Eisenstein series of the lattice: $$G_{n}(L) = \sum_{\omega \in L,\, \omega \neq 0} \frac{1}{\omega^{n}}.$$ We need $n \geq 3$ for this sum to converge. For, roughly speaking, we can rearrange it as a sum over $r$ of the lattice points on the boundary of a square of radius $r$. The number of lattice points on this boundary scales with $r$, so we end up computing something like $\sum_{r \geq 0} \frac{r}{r^{n}}$ and so we need $n \geq 3$ to make the sum converge.

Note that $G_{n}(L) = 0$ for $n$ odd, since every term $\omega$ is cancelled by the opposite term $-\omega$. So, the first two nontrivial Eisenstein series are $G_{4}$ and $G_{6}$. We can use them to put ‘Eisenstein coordinates’ on the space of lattices.

Theorem: The map $$\begin{aligned} \{\text{lattices}\} &\rightarrow \mathbb{C}^{2} \\ L &\mapsto (G_{4}(L), \, G_{6}(L)) \end{aligned}$$ is a bijection.

The nicest proof is in Serre’s A Course in Arithmetic, p. 89. It is a beautiful application of the Cauchy residue theorem, using the fact that $G_{4}$ and $G_{6}$ define modular forms on the upper half plane $H$. (Usually, number theorists set up their lattices so that they have basis vectors $1$ and $\tau$ where $\tau \in H$. But I want to avoid this ‘upper half plane’ picture as far as possible, since it breaks symmetry and mystifies the geometry. The whole point of the Ghys picture is that not breaking the symmetry reveals a beautiful hidden geometry! Of course, sometimes you need the ‘upper half plane’ picture, like in the proof of the above result.)

Lemma: The degenerate lattices are the ones satisfying $20 G_{4}^{3} - 49 G_{6}^{2} = 0$.

Let’s prove one direction of this lemma — that the degenerate lattices do indeed satisfy this equation. To see this, we need to perform a computation. Let’s calculate $G_{4}$ and $G_{6}$ of the lattice $\mathbb{Z} \subset \mathbb{C}$. Well, $$G_{4}(\mathbb{Z}) = \sum_{n \neq 0} \frac{1}{n^{4}} = 2 \zeta(4) = 2\,\frac{\pi^{4}}{90}$$ where we have cheated and looked up the answer on Wikipedia! Similarly, $G_{6}(\mathbb{Z}) = 2\,\frac{\pi^{6}}{945}$.

So we see that $20 G_{4}(\mathbb{Z})^{3} - 49 G_{6}(\mathbb{Z})^{2} = 0$. Now, every degenerate lattice is of the form $t\mathbb{Z}$ where $t \in \mathbb{C}$. Also, if we transform the lattice via $L \mapsto t L$, then $G_{4} \mapsto t^{-4} G_{4}$ and $G_{6} \mapsto t^{-6} G_{6}$. So the equation remains true for all the degenerate lattices, and we are done.
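As a sanity check on the computation above, here is a quick throwaway Python verification (truncating the sums at a large cutoff) that the lattice $\mathbb{Z}$ really does satisfy $20 G_{4}^{3} = 49 G_{6}^{2}$:

```python
import math

def eisenstein_Z(n, cutoff=200000):
    """Truncated Eisenstein series G_n for the lattice Z (sum over nonzero integers)."""
    return sum(1.0 / m ** n + 1.0 / (-m) ** n for m in range(1, cutoff + 1))

G4 = eisenstein_Z(4)
G6 = eisenstein_Z(6)

print(G4, 2 * math.pi ** 4 / 90)     # both ~2.16465
print(G6, 2 * math.pi ** 6 / 945)    # both ~2.03469
print(20 * G4 ** 3, 49 * G6 ** 2)    # the two sides agree to high accuracy
```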

Corollary: The space of nondegenerate lattices in the plane of unit area is homeomorphic to the complement of the trefoil in $S^{3}$.

The point is that given a lattice $L$ of unit area, we can scale it, $L \mapsto \lambda L$ with $\lambda \in \mathbb{R}^{+}$, until $(G_{4}(L), G_{6}(L))$ lies on the 3-sphere $S^{3} = \{(z,w) : |z|^{2} + |w|^{2} = 1\} \subset \mathbb{C}^{2}$. And the equation $20 z^{3} - 49 w^{2} = 0$ intersected with $S^{3}$ cuts out a trefoil knot… because it is “something cubed plus something squared equals zero”. And the lemma above says that the nondegenerate lattices are precisely the ones which do not satisfy this equation, i.e. they represent the complement of this trefoil.

Since we have not divided out by rotations, but only by scaling, we have arrived at a 3-dimensional picture which is very different to the 2-dimensional moduli space (upper half-plane divided by $SL(2,\mathbb{Z})$) picture familiar to a number theorist.

The modular flow

There is an intriguing flow on the space of lattices of unit area, called the modular flow. Think of $L$ as sitting in $\mathbb{R}^{2}$, and then act on $\mathbb{R}^{2}$ via the transformation $$\left( \begin{array}{cc} e^{t} & 0 \\ 0 & e^{-t} \end{array} \right),$$ dragging the lattice $L$ along for the ride. (This isn’t just some formula we pulled out of the blue — geometrically this is the ‘geodesic flow on the unit tangent bundle of the modular orbifold’.)

We are looking for periodic orbits of this flow.

“Impossible!” you say. “The points of the lattice go off to infinity!” Indeed they do… but disregard the individual points. The lattice itself can ‘click’ back into its original position:

animation

How are we to find such periodic orbits? Start with an integer matrix $$A = \left( \begin{array}{cc} a & b \\ c & d \end{array}\right) \in SL(2, \mathbb{Z})$$ and assume $A$ is hyperbolic, which simply means $|a + d| > 2$. Under these conditions, we can diagonalize $A$ over the reals, so we can find a real matrix $P$ such that $$P A P^{-1} = \pm \left( \begin{array}{cc} e^{t} & 0 \\ 0 & e^{-t} \end{array} \right)$$ for some $t \in \mathbb{R}$. Now set $L \coloneqq P(\mathbb{Z}^{2})$. We claim that $L$ is a periodic orbit of period $t$. Indeed: $$\begin{aligned} L_{t} &= \left( \begin{array}{cc} e^{t} & 0 \\ 0 & e^{-t} \end{array} \right) P (\mathbb{Z}^{2}) \\ &= \pm P A (\mathbb{Z}^{2}) \\ &= \pm P (\mathbb{Z}^{2}) \\ &= L. \end{aligned}$$ We have just proved one direction of the following.

Theorem: The periodic orbits of the modular flow are in bijection with the conjugacy classes of hyperbolic elements in $SL(2, \mathbb{Z})$.
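The construction in the proof is easy to play with numerically. Here is a small numpy sketch (my own illustration, with an arbitrarily chosen hyperbolic matrix, not anything from Ghys' talk) that diagonalizes $A$, reads off the period $t$, and confirms that the flow for time $t$ maps the lattice $P(\mathbb{Z}^{2})$ back to itself:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 1]])                 # in SL(2,Z); trace 3 > 2, so hyperbolic

# Diagonalize over the reals and order the eigenvalues as (e^t, e^-t) with t > 0.
eigvals, V = np.linalg.eig(A)
eigvals, V = eigvals.real, V.real
order = np.argsort(eigvals)[::-1]      # expanding eigenvalue first
eigvals, V = eigvals[order], V[:, order]

P = np.linalg.inv(V)                   # then P A P^{-1} = diag(e^t, e^-t)
t = np.log(eigvals[0])                 # the period of the orbit

flow_t = np.diag([np.exp(t), np.exp(-t)])

# Flowing for time t sends P(Z^2) to flow_t P (Z^2) = P A (Z^2) = P(Z^2),
# because A is an integer matrix of determinant 1, so it maps Z^2 onto Z^2.
print(np.allclose(flow_t @ P, P @ A))  # True: the lattice 'clicks' back
print(t)                               # the period of this orbit
```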

These periodic orbits produce fascinating knots in the complement of the trefoil! In fact, they link with the trefoil (the locus of degenerate lattices) in fascinating ways. Here are two examples, starting with different matrices $A \in SL(2, \mathbb{Z})$.

animation

The trefoil is the fixed orange curve, while the periodic orbits are the red and green curves respectively.

Ghys proved the following two remarkable facts about these modular knots.

  • The linking number of a modular knot with the trefoil of degenerate lattices equals the Rademacher function of the corresponding matrix in $SL(2, \mathbb{Z})$ (the change in phase of the Dedekind eta function).
  • The knots occurring in the modular flow are the same as those occurring in the Lorenz equations!

Who would have thought that lattices in the plane could tell the weather!!

I must say I have thought about many aspects of these closed geodesics, but it had never crossed my mind to ask which knots are produced. – Peter Sarnak

Jordan EllenbergMath blog roundup

Lots of good stuff happening in math blogging!


April 09, 2014

n-Category Café Operads and Trees

Nina Otter is a master’s student in mathematics at ETH Zürich who has just gotten into the PhD program at Oxford. She and I are writing a paper on operads and the tree of life.

Anyone who knows about operads knows that they’re related to trees. But I’m hoping someone has proved some precise theorems about this relationship, so that we don’t have to.

By operad I’ll always mean a symmetric topological operad. Such a thing has an underlying ‘symmetric collection’, which in turn has an underlying ‘collection’. A collection is just a sequence of topological spaces $O_n$ for $n \ge 0$. In a symmetric collection, each space $O_n$ has an action of the symmetric group $S_n$.

I’m hoping that someone has already proved something like this:

Conjecture 1. The free operad on the terminal symmetric collection has, as its space of $n$-ary operations, the set of rooted trees having some of their leaves labelled $\{1, \dots, n\}$.

Conjecture 2. The free operad on the terminal collection has, as its space of $n$-ary operations, the set of planar rooted trees having some of their leaves labelled $\{1, \dots, n\}$.

Calling them ‘conjectures’ makes it sound like they might be hard — but they’re not supposed to be hard. If they’re right, they should be easy and in some sense well-known! But there are various slightly different concepts of ‘rooted tree’ and ‘rooted planar tree’, and we have to get the details right to make these conjectures true. For example, a graph theorist might draw a rooted planar tree like this:

while an operad theorist might draw it like this:

If the conjectures are right, we can use them to define the concepts of ‘rooted tree’ and ‘rooted planar tree’, thus side-stepping these details. And having purely operad-theoretic definitions of ‘tree’ and ‘rooted tree’ would make it a lot easier to use these concepts in operad theory. That’s what I want to do, ultimately. But proving these conjectures, and of course providing the precise definitions of ‘rooted tree’ and ‘rooted planar tree’ that make them true, would still be very nice.

And it would be even nicer if someone had already done this. So please provide references… and/or correct mistakes in the following stuff!

Rooted Trees

Definition. For any natural number $n = 0, 1, 2, \dots$, an $n$-tree is a quadruple $T=(V,E,s,t)$ where:

  • $V$ is a finite set whose elements are called internal vertices;
  • $E$ is a finite non-empty set whose elements are called edges;
  • $s: E \to V \sqcup \{1,\dots, n\}$ and $t: E \to V \sqcup \{0\}$ are maps sending any edge to its source and target, respectively.

Given $u,v \in V \sqcup \{0\} \sqcup \{1,\dots, n\}$, we write $u \stackrel{e}{\longrightarrow} v$ if $e \in E$ is an edge such that $s(e)=u$ and $t(e)=v$.

This data is required to satisfy the following conditions:

  • $s : E \to V \sqcup \{1,\dots, n\}$ is a bijection;
  • there exists exactly one $e \in E$ such that $t(e)=0$;
  • for any $v \in V \sqcup \{1,\dots, n\}$ there exists a directed edge path from $v$ to $0$: that is, a sequence of edges $e_0, \dots, e_n$ and vertices $v_1, \dots, v_n$ such that $$v \stackrel{e_0}{\longrightarrow} v_1, \; v_1 \stackrel{e_1}{\longrightarrow} v_2, \; \dots, \; v_n \stackrel{e_n}{\longrightarrow} 0.$$

So the idea is that our tree has $V \sqcup \{0\} \sqcup \{1,\dots, n\}$ as its set of vertices. There could be lots of leaves, but we’ve labelled some of them by numbers $1, \dots, n$. In our pictures, the source of each edge is at its top, while the target is at bottom.

There is a root, called $0$, but also a ‘pre-root’: the unique vertex with an edge going from it to $0$. I’m not sure I like this last bit, and we might be able to eliminate this redundancy, but it’s built into the operad theorist’s picture here:

It might be a purely esthetic issue. Like everything else, it gets a bit more scary when we consider degenerate special cases.

I’m hoping there’s an operad $Tree$ whose set of $n$-ary operations, $Tree_n$, consists of isomorphism classes of $n$-trees as defined above. I’m hoping someone has already proved this. And I hope someone has characterized this operad $Tree$ in a more algebraic way, as follows.

Definition. A symmetric collection $C$ consists of topological spaces $\{C_n\}_{n \ge 0}$ together with a continuous action of the symmetric group $S_n$ on each space $C_n$. A morphism of symmetric collections $f : C \to C'$ consists of an $S_n$-equivariant continuous map $f_n : C_n \to C'_n$ for each $n \ge 0$. Symmetric collections and morphisms between them form a category $STop$.

(More concisely, if we denote the groupoid of sets of the form $\{1, \dots, n\}$ and bijections between these as $S$, then $STop$ is the category of functors from $S$ to $Top$.)

There is a forgetful functor from operads to symmetric collections

$U : Op \to STop$

with a left adjoint

$F : STop \to Op$

assigning to each symmetric collection the operad freely generated by it.

Definition. Let $Comm$ be the terminal operad: that is, the operad, unique up to isomorphism, such that $Comm_n$ is a 1-element set for each $n \ge 0$.

The algebras of $Comm$ are commutative topological monoids.

Conjecture 1. There is a unique isomorphism of operads

$\phi : F(U(Comm)) \stackrel{\sim}{\longrightarrow} Tree$

that for each $n \ge 0$ sends the unique $n$-ary operation in $Comm$ to the corolla with $n$ leaves: that is, the isomorphism class of $n$-trees with no internal vertices.

(When I say “the unique $n$-ary operation in $Comm$”, but treating it as an operation in $F(U(Comm))$, I’m using the fact that the unique operation in $Comm_n$ gives an element in $U(Comm)_n$, and thus an operation in $F(U(Comm))_n$.)

Planar Rooted Trees

And there should be a similar result relating planar rooted trees to collections without symmetric group actions!

Definition. A planar $n$-tree is an $n$-tree in which each internal vertex $v$ is equipped with a linear order on the set of its children, i.e. the set $t^{-1}(v)$.

I’m hoping someone has constructed an operad $PTree$ whose set of $n$-ary operations, $PTree_n$, consists of isomorphism classes of planar $n$-trees. And I hope someone has characterized this operad $PTree$ as follows:

Definition. A collection $C$ consists of topological spaces $\{C_n\}_{n \ge 0}$. A morphism of collections $f : C \to C'$ consists of a continuous map $f_n : C_n \to C'_n$ for each $n \ge 0$. Collections and morphisms between them form a category $\mathbb{N}Top$.

(If we denote the category with natural numbers as objects and only identity morphisms between them as $\mathbb{N}$, then $\mathbb{N}Top$ is the category of functors from $\mathbb{N}$ to $Top$.)

There is a forgetful functor

$\Upsilon : Op \to \mathbb{N}Top$

with a left adjoint

$\Phi : \mathbb{N}Top \to Op$

assigning to each collection the operad freely generated by it. This left adjoint is the composite

$\mathbb{N}Top \stackrel{G}{\longrightarrow} STop \stackrel{F}{\longrightarrow} Op$

where the first functor freely creates a symmetric collection $G(C)$ from a collection $C$ by setting $G(C)_n = S_n \times C_n$, and the second freely generates an operad from a symmetric collection, as described above.

Conjecture 2. There is a unique isomorphism of operads

$\psi : \Phi(\Upsilon(Comm)) \stackrel{\sim}{\longrightarrow} PTree$

that for each $n \ge 0$ sends the unique $n$-ary operation in $Comm$ to the corolla with $n$ leaves ordered so that $1 < \cdots < n$.

Have you seen a proof of this stuff???

Andrew JaffeSpring & Summer Science

As the academic year winds to a close, scientists’ thoughts turn towards all of the warm-weather travel ahead (in order to avoid thinking about exam marking). Mostly, that means attending scientific conferences, like the upcoming IAU Symposium, Statistical Challenges in 21st Century Cosmology in Lisbon next month, and (for me and my collaborators) the usual series of meetings to prepare for the 2014 release of Planck data. But there are also opportunities for us to interact with people outside of our technical fields: public lectures and festivals.

Next month, parallel to the famous Hay Festival of Literature & the Arts, the town of Hay-on-Wye also hosts How The Light Gets In, concentrating on the also-important disciplines of philosophy and music, with a strong strand of science thrown in. This year, along with comic book writer Warren Ellis, cringe-inducing politicians like Michael Howard and George Galloway, ubiquitous semi-intellectuals like Joan Bakewell, there will be quite a few scientists, with a skew towards the crowd-friendly and controversial. I’m not sure that I want to hear Rupert Sheldrake talk about the efficacy of science and the scientific method, although it might be interesting to hear Julian Barbour, Huw Price, and Lee Smolin talk about the arrow of time. Some of the descriptions are inscrutable enough to pique my interest: Nancy Cartwright and George Ellis will discuss “Ultimate Proof” — I can’t quite figure out if that means physics or epistemology. Perhaps similarly, chemist Peter Atkins will ask “Can science explain all of existence” (and apparently answer in the affirmative). Closer to my own wheelhouse, Roger Penrose, Laura Mersini-Houghton, and John Ellis will discuss whether it is “just possible the Big Bang will turn out to be a mistake”. Penrose was and is one of the smartest people to work out the consequences of Einstein’s general theory of relativity, though in the last few years his cosmological musings have proven to be, well, just plain wrong — but, as I said, controversial and crowd-pleasing… (Disclosure: someone from the festival called me up and asked me to write about it here.)

Alas, I’ll likely be in Lisbon, instead of Hay. But if you want to hear me speak, you can make your way up North to Grantham, where Isaac Newton was educated, for this year’s Gravity Fields festival in late September. The line-up isn’t set yet, but I’ll be there, as will my fellow astronomers Chris Lintott and Catherine Heymans and particle physicist Val Gibson, alongside musicians, dancers, and lots of opportunities to explore the wilds of Lincolnshire. Or if you want to see me before then (and prefer to stay in London), you can come to Imperial for my much-delayed Inaugural Professorial Lecture on May 21, details TBC…

Tommaso DorigoWhat Next ?

Yesterday I was in Rome, at a workshop organized by the Italian National Institute for Nuclear Physics (INFN), titled "What Next". The event was meant to discuss the plan for basic research in fundamental physics and astrophysics beyond the next decade or so, given the input we have and the input we might collect in the next few years at accelerators and other facilities.

read more

David Hoggfit all your streams, gamma-Earth

I spoke with Kathryn Johnston's group by phone for a long time at midday, about the meeting last week at Oxford. I opined that "the competition" is going to stick with integrable orbits for a while, so we can occupy the niche of more general potentials and orbit families. We discussed at some length the disagreement between Sanders (Oxford) and Bovy about how and why streams are different from orbits. Towards the end of that meeting, we discussed Price-Whelan's PhD projects, in which he wants to include a balance of theory and real-data inference. I argued strongly that Price-Whelan should follow the Branimir Sesar (MPIA) "plan", which is to fit all the known streams and use those fits to figure out what observations are most crucial to do next. Plus maybe some theory.

In the afternoon, Foreman-Mackey and I discussed figures and content for his "gamma-Earth" paper (not "eta-Earth" but "gamma-Earth"). We decided to choose a fiducial model, work that through completely, and show all the other things we know as adjustments to that fiducial model. We also discussed how to show everything on one big figure (which would be great, for talks and the paper). Foreman-Mackey told me that the Tremaine papers on planet occurrence get the likelihood function for the variable-rate Poisson problem correct (including overall normalization); our only "advances" relative to the Tremaine papers are that we have a more flexible functional form for the rate function and its prior, and we fully account for the observational uncertainties (which basically no-one knows how to do at this point).
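For reference, the "variable-rate Poisson problem" here is (as far as I can tell, and written in standard textbook notation rather than the notation of the papers mentioned above) the inhomogeneous Poisson point process, whose properly normalized log-likelihood for a catalogue of detections at positions $x_i$ with rate function $\Gamma(x)$ is

```latex
% Standard inhomogeneous-Poisson log-likelihood (textbook form, not quoted
% from the papers above): the integral term supplies the normalization.
\ln \mathcal{L} = \sum_{i} \ln \Gamma(x_i) - \int \Gamma(x)\, \mathrm{d}x
```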

April 08, 2014

Mark Chu-CarrollThe Heartbleed Bug

There’s a lot of panic going around on the internet today, about something called the Heartbleed bug. I’ve gotten questions, so I’m giving answers.

I’ve heard lots of hype. Is this really a big deal?

Damn right it is!

It’s pretty hard to wrap your head around just how bad this actually is. It’s probably even more of a big deal than the hype has made it out to be!

This bug affects around 90% of all sites on the internet that use secure connections. Seriously: if you’re using the internet, you’re affected by this. It doesn’t matter how reputable or secure the sites you connect to have been in the past: the majority of them are probably vulnerable to this, and some number of them have, in all likelihood, been compromised! Pretty much any website running on Linux or NetBSD, using Apache or NGINX as its webserver, is vulnerable. That means just about every major site on the net.

The problem is a bug in a commonly used version of SSL/TLS. So, before I explain what the bug is, I’ll run through a quick background.

What is SSL/TLS?

When you’re using the internet in the simplest mode, you’re using a simple set of communication protocols called TCP/IP. In basic TCP/IP protocols, when you connect to another computer on the network, the data that gets sent back and forth is not encrypted or obscured – it’s just sent in the clear. That means that it’s easy for anyone who’s on the same network cable as you to look at your connection, and see the data.

For lots of things, that’s fine. For example, if you want to read this blog, there’s nothing confidential about it. Everyone who reads the blog sees the same content. No one is going to see anything private.

But for a lot of other things, that’s not true. You probably don’t want someone to be able to see your email. You definitely don’t want anyone else to be able to see the credit card number you use to order things from Amazon!

To protect communications, there’s another protocol called SSL, the Secure Sockets Layer. When you connect to another site that’s got a URL starting with https:, the two computers establish an encrypted connection. Once an SSL connection is established between two computers, all communication between them is encrypted.

Actually, on most modern systems, you’re not really using SSL. You’re using a successor to the original SSL protocol called TLS, which stands for transport layer security. Pretty much everyone is now using TLS, but many people still just say SSL, and in fact the most commonly used implementation of it is in package called OpenSSL.

So SSL/TLS is the basic protocol that we use on the internet for secure communications. If you use SSL/TLS correctly, then the information that you send and receive can only be accessed by you and the computer that you’re talking to.

Note the qualifier: if you use SSL correctly!

SSL is built on public key cryptography. What that means is that a website identifies itself using a pair of keys. There’s one key, called a public key, that it gives away to everyone; and there’s a second key, called a private key, that it keeps secret. Anything that you encrypt with the public key can only be decrypted using the private key; anything encrypted with the private key can only be decrypted using the public key. That means that if you get a message that can be decrypted using the site's public key, you know that no one except the site could have encrypted it! And if you use the public key to encrypt something, you know that no one except that site will be able to decrypt it.

Public key cryptography is an absolutely brilliant idea. But it relies on the fact that the private key is absolutely private! If anyone else can get a copy of the private key, then all bets are off: you can no longer rely on anything about that key. You couldn’t be sure that messages came from the right source; and you couldn’t be sure that your messages could only be read by an authorized person.
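Here's a toy illustration of that public/private relationship, using textbook RSA with absurdly small numbers in plain Python (this is only a sketch of the idea; real TLS key exchange is more involved, and nothing below is remotely secure):

```python
# Toy RSA: numbers this small are trivially breakable; for illustration only.
p, q = 61, 53
n = p * q                          # the modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                             # public exponent; public key is (n, e)
d = pow(e, -1, phi)                # private exponent; private key is (n, d)  (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)           # anyone can encrypt with the public key...
assert pow(ciphertext, d, n) == message   # ...only the private key decrypts it

signature = pow(message, d, n)            # only the private key can produce this...
assert pow(signature, e, n) == message    # ...anyone can check it with the public key
```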

So what’s the bug?

The SSL protocol includes something called a heartbeat. It’s a periodic exchange between the two sides of a connection, to let them know that the other side is still alive and listening on the connection.

One of the options in the heartbeat is an echo request, which is illustrated below. Computer A wants to know if B is listening. So A sends a message to B saying “Here’s X bytes of data. Send them back to me.” Then A waits. If it gets a message back from B containing the same X bytes of data, it knows B was listening. That’s all there is to it: the heartbeat is just a simple way to check that the other side is actually listening to what you say.

heartbeat

The bug is really, really simple. The attacker sends a heartbeat message saying “I’m gonna send you a chunk of data containing 64000 bytes”, but then the data only contains one byte.

If the code worked correctly, it would say “Oops, invalid request: you said you were going to send me 64000 bytes of data, but you only sent me one!” But what the buggy version of SSL does is send you that 1 byte, plus 63,999 bytes of whatever happens to be in memory next to wherever it saved that byte.

heartbleed

You can’t choose what data you’re going to get in response. It’ll just be a bunch of whatever happened to be in memory. But you can do it repeatedly, and get lots of different memory chunks. If you know how the SSL implementation works, you can scan those memory chunks, and look for particular data structures – like private keys, or cookies. Given a lot of time, and the ability to connect multiple times and send multiple heartbeat requests each time you connect, you can gather a lot of data. Most of it will be crap, but some of it will be the valuable stuff.
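Here is a schematic of that missing check in Python (the real bug lives in OpenSSL's C code, and all of the names below are made up; this just mimics the logic):

```python
# Schematic only: mimics the missing length check, not OpenSSL's actual code.
process_memory = bytearray(b"...session cookies...PRIVATE-KEY-MATERIAL...")

def store_payload(payload: bytes) -> int:
    """Pretend to receive a heartbeat payload: copy it into memory, return its offset."""
    offset = 0
    process_memory[offset:offset + len(payload)] = payload
    return offset

def heartbeat_response_buggy(offset: int, claimed_length: int) -> bytes:
    # Trusts the attacker-supplied length and reads past the end of the payload.
    return bytes(process_memory[offset:offset + claimed_length])

def heartbeat_response_fixed(offset: int, payload_length: int, claimed_length: int) -> bytes:
    if claimed_length > payload_length:            # the missing bounds check
        raise ValueError("claimed length exceeds actual payload")
    return bytes(process_memory[offset:offset + claimed_length])

off = store_payload(b"A")                 # attacker sends 1 byte...
print(heartbeat_response_buggy(off, 40))  # ...and asks for 40 bytes back, leaking memory
```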

To make matters worse, the heartbeat is treated as something very low-level which happens very frequently, and which doesn’t transfer meaningful data, so the implementation doesn’t log heartbeats at all. That means there’s no way of even identifying which connections to a server have been exploiting this. A site that’s running one of the buggy versions of OpenSSL has no way of knowing whether or not it’s been the target of this attack!

See what I mean about it being a big deal?

Why is it so widespread?

When I’ve written about security in the past, one of the things that I’ve said repeatedly is: if you’re thinking about writing your own implementation of a security protocol, STOP. Don’t do it! There are a thousand ways that you can make a tiny, trivial mistake which completely compromises the security of your code. It’s not a matter of whether you’re smart or not; it’s just a simple statement of fact. If you try to do it, you will screw something up. There are just so many subtle ways to get things wrong: it takes a whole team of experts to even have a chance to get it right.

Most engineers who build stuff for the internet understand that, and don’t write their own cryptosystems or cryptographic protocols. What they do is use a common, well-known, public system. They know the system was implemented by experts; they know that there are a lot of responsible, smart people who are doing their best to find and fix any problems that crop up.

Imagine that you’re an engineer picking an implementation of SSL. You know that you want as many people trying to find problems as possible. So which one will you choose? The one that most people are using! Because that’s the one that has the most people working to make sure it doesn’t have any problems, or to fix any problems that get found as quickly as possible.

The most widely used version of SSL is an open-source software package called OpenSSL. And that’s exactly where the bug is: in OpenSSL.

How can it be fixed?

Normally, something like this would be bad. But you’d be able to just update the implementation to a new version without the bug, and that would be the end of the problem. But this case is pretty much the worst possible case: fixing the implementation doesn’t entirely fix the problem. Because even after you’ve fixed the SSL implementation, if someone got hold of your private key, they still have it. And there’s no way to know if anyone stole your key!

To fix this, then, you need to do much more than just update the SSL implementation. You need to cancel all of your keys, replace them with new ones, and get everyone who has a copy of your public key to throw it away and stop using it.

So basically, at the moment, every keypair for nearly every major website in the world that was valid yesterday can no longer be trusted.

Sean CarrollChaos, Hallucinogens, Virtual Reality, and the Science of Self

Chaotic Awesome is a webseries hosted by Chloe Dykstra and Michele Morrow, generally focused on all things geeky, such as gaming and technology. But the good influence of correspondent Christina Ochoa ensures that there is also a healthy dose of real science on the show. It was a perfect venue for Jennifer Ouellette — science writer extraordinaire, as well as beloved spouse of your humble blogger — to talk about her latest masterwork, Me, Myself, and Why: Searching for the Science of Self.

Jennifer’s book runs the gamut from the role of genes in forming personality to the nature of consciousness as an emergent phenomenon. But it also fits very naturally into a discussion of gaming, since our brains tend to identify very strongly with avatars that represent us in virtual spaces. (My favorite example is Jaron Lanier’s virtual lobster — the homuncular body map inside our brain is flexible enough to “grow new limbs” when an avatar takes a dramatically non-human form.) And just for fun for the sake of scientific research, Jennifer and her husband tried out some psychoactive substances that affect the self/other boundary in a profound way. I’m mostly a theorist, myself, but willing to collect data when it’s absolutely necessary.

John PreskillDefending against high-frequency attacks

It was the summer of 2008. I was 22 years old, and it was my second week working in the crude oil and natural gas options pit at the New York Mercantile Exchange (NYMEX.) My head was throbbing after two consecutive weeks of disorientation. It was like being born into a new world, but without the neuroplasticity of a young human. And then the crowd erupted. “Yeeeehawwww. YeEEEeeHaaaWWWWW. Go get ‘em cowboy.”

It seemed that everyone on the sprawling trading floor had started playing Wild Wild West and I had no idea why. After at least thirty seconds, the hollers started to move across the trading floor. They moved away 100 meters or so and then doubled back towards me. After a few meters, he finally got it, and I’m sure he learned a life lesson. Don’t be the biggest jerk in a room filled with traders, and especially, never wear triple-popped pastel-colored Lacoste shirts. This young aspiring trader had been “spurred.”

In other words, someone had made paper spurs out of trading receipts and taped them to his shoes. Go get ‘em cowboy.

I was one academic quarter away from finishing a master’s degree in statistics at Stanford University and I had accepted a full time job working in the algorithmic trading group at DRW Trading. I was doing a summer internship before finishing my degree, and after three months of working in the algorithmic trading group in Chicago, I had volunteered to work at the NYMEX. Most ‘algo’ traders didn’t want this job, because it was far-removed from our mental mathematical monasteries, but I knew I would learn a tremendous amount, so I jumped at the opportunity. And by learn, I mean, get ripped calves and triceps, because my job was to stand in place for seven straight hours updating our mathematical models on a bulky tablet PC as trades occurred.

I have no vested interests in the world of high-frequency trading (HFT). I’m currently a PhD student in the quantum information group at Caltech and I have no intentions of returning to finance. I found the work enjoyable, but not as thrilling as thinking about the beginning of the universe (what else is?) However, I do feel like the current discussion about HFT is lop-sided and I’m hoping that I can broaden the perspective by telling a few short stories.

What are the main attacks against HFT? Three of them include the evilness of: front-running markets, making money out of nothing, and instability. It’s easy to point to extreme examples of algorithmic traders abusing markets, and they regularly do, but my argument is that HFT has simply computerized age-old tactics. In this process, these tactics have become more benign and markets more stable.

Front-running markets: large oil producing nations, such as Mexico, often want to hedge their exposure to changing market prices. They do this by purchasing options. This allows them to lock in a minimum sale price, for a fee of a few dollars per barrel. During my time at the NYMEX, I distinctly remember a broker shouting into the pit: “what’s the price on DEC9 puts.” A trader doesn’t want to give away whether they want to buy or sell, because if the other traders know, then they can artificially move the price. In this particular case, this broker was known to sometimes implement parts of Mexico’s oil hedge. The other traders in the pit suspected this was a trade for Mexico because of his anxious tone, some recent geopolitical news, and the expiration date of these options.

Some confident traders took a risk and faded the market. They ended up making between $1-2 million dollars from these trades, relative to what the fair price was at that moment. I mention relative to the fair price, because Mexico ultimately received the better end of this trade. The price of oil dropped in 2009, and Mexico executed its options enabling it to sell its oil at a higher than market price. Mexico spent $1.5 billion to hedge its oil exposure in 2009.

This was an example of humans anticipating the direction of a trade and capturing millions of dollars in profit as a result. It really is profit as long as the traders can redistribute their exposure at the ‘fair’ market price before markets move too far. The analogous strategy in HFT is called “front-running the market” which was highlighted in the New York Times’ recent article “the wolf hunters of Wall Street.” The HFT version involves analyzing the prices on dozens of exchanges simultaneously, and once an order is published in the order book of one exchange, then using this demand to adjust its orders on the other exchanges. This needs to be done within a few microseconds in order to be successful. This is the computerized version of anticipating demand and fading prices accordingly. These tactics as I described them are in a grey area, but they rapidly become illegal.

Making money from nothing: arbitrage opportunities have existed for as long as humans have been trading. I’m sure an ancient trader received quite the rush when he realized for the first time that he could buy gold in one marketplace and then sell it in another, for a profit. This is only worth the trader’s efforts if he makes a profit after all expenses have been taken into consideration. One of the simplest examples in modern terms is called triangle arbitrage, and it usually involves three pairs of currencies. Currency pairs are ratios, such as USD/AUD, which tells you how many Australian dollars you receive for one US dollar. Imagine that there is a moment in time when the product of ratios $\frac{USD}{AUD}\frac{AUD}{CAD}\frac{CAD}{USD}$ is 1.01. Then, a trader can take her USD, buy AUD, then use her AUD to buy CAD, and then use her CAD to buy USD. As long as the underlying prices didn’t change while she carried out these three trades, she would capture one cent of profit per dollar traded.
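A minimal numerical sketch of that round trip, with made-up quotes (the ratios follow the convention used above, i.e. USD/AUD is how many AUD you receive per USD):

```python
# Made-up quotes, chosen so the round-trip factor comes out near 1.01.
usd_to_aud = 1.52     # AUD received per USD
aud_to_cad = 0.89     # CAD received per AUD
cad_to_usd = 0.7467   # USD received per CAD

round_trip = usd_to_aud * aud_to_cad * cad_to_usd   # USD recovered per USD spent
stake_usd = 1_000_000
profit_usd = stake_usd * (round_trip - 1.0)

print(f"round-trip factor: {round_trip:.4f}")           # ~1.0101
print(f"profit on ${stake_usd:,}: ${profit_usd:,.2f}")  # the arbitrage profit
```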

After a few trades like this, the prices will equilibrate and the ratio will be restored to one. This is an example of “making money out of nothing.” Clever people have been trading on arbitrage since ancient times and it is a fundamental source of liquidity. It guarantees that the price you pay in Sydney is the same as the price you pay in New York. It also means that if you’re willing to overpay by a penny per share, then you’re guaranteed a computer will find this opportunity and your order will be filled immediately. The main difference now is that once a computer has been programmed to look for a certain type of arbitrage, then the human mind can no longer compete. This is one of the original arenas where the term “high-frequency” was used. Whoever has the fastest machines, is the one who will capture the profit.

Instability: I believe that the arguments against HFT of this type have the most credibility. The concern here is that exceptional leverage creates opportunity for catastrophe. Imaginations ran wild after the Flash Crash of 2010, and even if imaginations outstripped reality, we learned much about the potential instabilities of HFT. A few questions were posed, and we are still debating the answers. What happens if market makers stop trading in unison? What happens if a programming error leads to billions of dollars in mistaken trades? Do feedback loops between algo strategies lead to artificial prices? These are reasonable questions, which are grounded in examples, and future regulation coupled with monitoring should add stability where it’s feasible.

The culture in wealth-driven industries today is appalling. However, it’s no worse in HFT than in finance more broadly, or in many other industries. It’s important that we dissociate our disgust at a broad culture of greed from debates about the merits of HFT. Black boxes are easy targets for blame because they don’t defend themselves. But that doesn’t mean they aren’t useful when implemented properly.

Are we better off with HFT? I’d argue a resounding yes. The primary function of markets is to allocate capital efficiently. Three of the strongest measures of the efficacy of markets are bid-ask spreads, volume, and volatility. If spreads are low and volume is high, then participants are essentially guaranteed access to capital at as close to the “fair price” as possible. There is a huge academic literature on how HFT has impacted spreads and volume, and the majority of it indicates that spreads have narrowed and volume has increased. However, as alluded to above, all of these points are subtle, but in my opinion it’s clear that HFT has increased the efficiency of markets (it turns out that computers can sometimes be helpful). Estimates of HFT’s impact on volatility haven’t been nearly as favorable, but I’d also argue these studies are more debatable. Basically, correlation is not causation, and it just so happens that our rapidly developing world is probably more volatile than the pre-HFT world of the last millennium.
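
As a concrete illustration of those three measures, here is a minimal sketch with made-up quote snapshots (not real market data): the average quoted bid-ask spread in basis points, the total traded volume, and the realized volatility of the mid-price.

```python
import math

bids  = [10.00, 10.01, 10.01, 10.02, 10.00]   # best bid at five snapshots (made up)
asks  = [10.02, 10.02, 10.03, 10.03, 10.02]   # best ask at the same snapshots
sizes = [500, 800, 300, 1200, 700]            # shares traded between snapshots

mids = [(b + a) / 2 for b, a in zip(bids, asks)]
avg_spread_bps = 1e4 * sum((a - b) / m for b, a, m in zip(bids, asks, mids)) / len(mids)
volume = sum(sizes)
rets = [math.log(m2 / m1) for m1, m2 in zip(mids, mids[1:])]
mean_r = sum(rets) / len(rets)
realized_vol = math.sqrt(sum((r - mean_r) ** 2 for r in rets) / (len(rets) - 1))

print(f"spread: {avg_spread_bps:.1f} bps, volume: {volume}, per-period vol: {realized_vol:.5f}")
```

The empirical debate is about how numbers like these moved as HFT grew, and about how much of that movement can honestly be attributed to HFT rather than to everything else that changed at the same time.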

We could regulate away HFT, but we wouldn’t be able to get rid of the underlying problems people point to unless we got rid of markets altogether. As with any new industry, there are aspects of HFT that should be better monitored and regulated, but we should keep level heads and consider diverse data points as we continue this discussion. As with most important problems, I believe the ultimate solution here lies in educating the public. Or in other words, this is my plug for Python classes for all children!!

I promise that I’ll repent by writing something that involves actual quantum things within the next two weeks!


David Hogg: probabilistic grammar, massive graviton

In a low-research day, I saw two absolutely excellent seminars. The first was Alexander Rush (MIT, Columbia) talking about methods for finding the optimal parsing or syntactical structure for a natural-language sentence using Lagrangian relaxation. The point is that the number of parsings is combinatorially large, so you have to do clever things to find good ones. He also looked at machine translation, which is a closely related problem. At the end of his talk he discussed extraction of structured information from unstructured text, which might be applicable to the scientific literature.

Over lunch, Sergei Dubovsky (NYU) spoke about massive-graviton theories and the recent BICEP2 results. He started by explaining that there are non-pathological gravity modifications in which the graviton is massive in its tensor effects, but doesn't get messed up in its scalar and vector effects. This means you have no change to the "force law" as it were (nor to the black-hole solutions nor the cosmological world model), but you do modify gravitational radiation. He then said two amazing things: The first is that the BICEP2 result, if it holds up, will put the strongest-ever bound on the graviton mass, because it means that gravitational radiation propagated a significant fraction of a Hubble length. The second is that the BICEP2 data are better fit by a model with a tiny but nonzero graviton mass than by the standard massless theory. That's insane! But of course these are early days, and there is much skepticism about the data, let alone the theory. Great talks today!

Backreaction: Will the social sciences ever become hard sciences?

The term “hard science” as opposed to “soft science” has no clear definition. But roughly speaking, the less the predictive power and the smaller the statistical significance, the softer the science. Physics, without doubt, is the hard core of the sciences, followed by the other natural sciences and the life sciences. The higher the complexity of the systems a research area is dealing with, the softer it tends to be. The social sciences are at the soft end of the spectrum.

To me the very purpose of research is making science increasingly harder. If you don’t want to improve on predictive power, what’s the point of science to begin with? The social sciences are soft mainly because data that quantifies the behavior of social, political, and economic systems is hard to come by: it comes in huge amounts, is difficult to obtain, and is even more difficult to handle. Historically, these research areas therefore worked with narratives relating plausible causal relations. Needless to say, as computing power skyrockets, increasingly larger data sets can be handled. So the social sciences are finally on track to become useful. Or so you’d think if you’re a physicist.

But interestingly, there is a large opposition to this trend of hardening the social sciences, and this opposition is particularly pronounced towards physicists who take their knowledge to work on data about social systems. You can see this opposition in the comment section to every popular science article on the topic. “Social engineering!” they will yell accusingly.

It isn’t so surprising that social scientists themselves are unhappy because the boat of inadequate skills is sinking in the data sea and physics envy won’t keep it afloat. More interesting than the paddling social scientists is the public opposition to the idea that the behavior of social systems can be modeled, understood, and predicted. This opposition is an echo of the desperate belief in free will that ignores all evidence to the contrary. The desperation in both cases is based on unfounded fears, but unfortunately it results in a forward defense.

And so the world is full of people who argue that they must have free will because they believe they have free will, the ultimate confirmation bias. And when it comes to social systems they’ll snort at the physicists: “People are not elementary particles.” That worries me, worries me more than their clinging to the belief in free will, because the only way we can solve the problems that mankind faces today – the global problems in highly connected and multi-layered political, social, economic and ecological networks – is to better understand and learn how to improve the systems that govern our lives.

That people are not elementary particles is not a particularly deep insight, but it collects several valid points of criticism:

  1. People are too difficult. You can’t predict them.

    Humans are made of many elementary particles, and even though you don’t have to know the exact motion of every single one of these particles, a person still has an awful lot of degrees of freedom and needs to be described by a lot of parameters. That’s a complicated way of saying that people can do more things than electrons, and it isn’t always clear exactly why they do what they do.

    That is correct of course, but this objection fails to take into account that not all possible courses of action are always relevant. If it were true that people have too many possible ways to act for us to gather any useful knowledge about their behavior, our world would be entirely dysfunctional. Our societies work only because people are to a large degree predictable.

    If you go shopping you expect certain behaviors of other people. You expect them to be dressed, you expect them to walk forwards, you expect them to read labels and put things into a cart. There, I’ve made a prediction about human behavior! Yawn, you say, I could have told you that. Sure you could, because making predictions about other people’s behavior is pretty much what we do all day. Modeling social systems is just a scientific version of this.

    This objection that people are just too complicated is also weak because, as a matter of fact, humans can be and have been modeled with quite simple systems. This is particularly effective in situations where intuitive reaction trumps conscious deliberation. Existing examples are traffic flow and the density of crowds that have to pass through narrow passages (a toy traffic model is sketched below, after this list).

    So, yes, people are difficult and they can do strange things, more things than any model can presently capture. But modeling a system is always an oversimplification. The only way to find out whether that simplification works is to actually test it with data.

  2. People have free will. You cannot predict what they will do.

    To begin with, it is highly questionable that people have free will. But leaving this aside for a moment, this objection confuses the predictability of individual behavior with statistical trends across large numbers of people. Maybe you don’t feel like going to work tomorrow, but most people will go. Maybe you like to take walks in the pouring rain, but most people don’t. The existence of free will is in no conflict with discovering correlations between certain types of behavior or preferences in groups. It’s the same difference that doesn’t allow you to tell when your children will speak their first word or take their first step, but does tell you that almost certainly by the age of three they’ll have mastered both.

  3. People can understand the models and this knowledge makes predictions useless.

    This objection always stuns me. If that were true, why then isn’t obesity cured by telling people it will remain a problem? Why are the highways still clogged at 5pm if I predict they will be clogged? Why will people drink more beer if it’s free, even though they know it’s free precisely to make them drink more? Because in most cases the mere fact that a prediction exists doesn’t constitute any good reason to change behavior. I can predict that you will almost certainly still be alive when you finish reading this blogpost, because I know this prediction is exceedingly unlikely to make you want to prove it wrong.

    Yes, there are cases when people’s knowledge of a prediction changes their behavior – self-fulfilling prophecies are the best-known examples of this. But this is the exception rather than the rule. In an earlier blogpost, I referred to this as societal fixed points. These are configurations in which the backreaction of the model into the system does not change the prediction. The simplest example is a model whose predictions few people know or care about.

  4. Effects don’t scale and don’t transfer.

    This objection is the most subtle one. It posits that the social sciences aren’t really sciences until you can do and reproduce the outcome of “experiments”, which may be designed or naturally occurring. The typical social experiment that lends itself to analysis will be in relatively small and well-controlled communities (say, testing the implementation of a new policy). But then you have to extrapolate from this how the results will be in larger and potentially very different communities. Increasing the size of the system might bring in entirely new effects that you didn’t even know of (doesn’t scale), and there are a lot of cultural variables that your experimental outcome might have depended on that you didn’t know of and thus cannot adjust for (doesn’t transfer). As a consequence, repeating the experiment elsewhere will not reproduce the outcome.

    Indeed, this is likely to happen and I think it is the major challenge in this type of research. For complex relations it will take a long time to identify the relevant environmental parameters and to learn how to account for their variation. The more parameters there are and the more relevant they are, the lower the predictive value of a model will be. If there are too many parameters that have to be accounted for, it basically means doing experiments is the only thing we can ever do. It seems plausible to me, even likely, that there are types of social behavior that fall into this category, and that will leave us with questions that we just cannot answer.

    However, whether or not a certain trend can be modeled we will only find out by trying. We know that there are cases where it can be done. Geoffrey West’s theory of cities is, I find, a beautiful example where quite simple laws can be found in the midst of all these cultural and contextual differences.
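
Returning to the traffic-flow example from point 1, here is a toy sketch (my own illustration, not anyone’s published code) of the Nagel–Schreckenberg cellular automaton: each driver follows three simple rules per time step, yet the model reproduces the spontaneous stop-and-go jams seen on real roads. The parameters are made up.

```python
import random

def nagel_schreckenberg(n_cells=100, n_cars=30, v_max=5, p_slow=0.3,
                        steps=200, seed=1):
    """Single-lane ring road; returns the mean speed in cells per time step."""
    rng = random.Random(seed)
    positions = sorted(rng.sample(range(n_cells), n_cars))   # distinct cells
    velocities = [0] * n_cars
    total_moved = 0
    for _ in range(steps):
        new_positions = []
        for i, (x, v) in enumerate(zip(positions, velocities)):
            ahead = positions[(i + 1) % n_cars]       # car in front (old position)
            gap = (ahead - x - 1) % n_cells           # empty cells in between
            v = min(v + 1, v_max)                     # 1. accelerate
            v = min(v, gap)                           # 2. don't hit the car ahead
            if v > 0 and rng.random() < p_slow:       # 3. random human dawdling
                v -= 1
            velocities[i] = v
            new_positions.append((x + v) % n_cells)   # move
            total_moved += v
        positions = new_positions
    return total_moved / (steps * n_cars)

for cars in (10, 30, 60):                             # increasing car density
    print(cars, "cars:", round(nagel_schreckenberg(n_cars=cars), 2))
```

Despite its crudeness, the mean speed collapses once the density of cars passes a threshold, which is exactly the kind of simple, testable regularity the argument above is pointing at.
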
In summary.

The social sciences will never be as “hard” as the natural sciences because there is much more variation among people than among particles and among cities than among molecules. But the social sciences have become harder already and there is no reason why this trend shouldn’t continue. I certainly hope it will continue because we need this knowledge to collectively solve the problems we have collectively created.

April 07, 2014

Sean Carroll: Peregrinations

Running around these days, doing some linear combination of actual work and talking about things. (Sometimes the talking leads to actual work, so it’s not a total loss.) If you happen to be in a sciencey kind of mood when I’m in your vicinity, feel free to come to a talk and say hi!

  • Today, if you happen to be in Walla Walla, Washington, I’ll be giving the Brattain lecture at Whitman College, on the Higgs boson and the hunt therefor.
  • Next week I’ll be in Austin, TX. On Thursday April 17, I’ll be speaking in the Distinguished Lecture Series, once again on the marvels of the Higgs.
  • In May I’ll be headed to the Big Apple twice. The first time will be on May 7 for an Intelligence Squared Debate. The subject will be “Is There Life After Death?” I’ll be saying no, and Steven Novella will be there alongside me; our opponents will be Eben Alexander and Raymond Moody.
  • Then back home for a bit, and then back to NYC again, for the World Science Festival. I’ll be participating in a few events there between May 29 and 31, although precise spatio-temporal locations have yet to be completely determined. One will be about “Science and Story,” one will be a screening of Particle Fever, and one will be a book event.
  • I won’t even have a chance to return home from NYC before jetting to the UK for the Cheltenham Science Festival. Once again, participating in a few different events, all on June 3/4: something on Science and Hollywood, something on the Higgs, and something on the arrow of time. Check local listings!
  • There’s a chance I will be in Oxford right after Cheltenham, but nothing’s settled yet.
  • That’s it. Looking forward to a glorious summer full of real productivity.

n-Category Café: On a Topological Topos

Guest post by Sean Moss

In this post I shall discuss the paper “On a Topological Topos” by Peter Johnstone. The basic problem is that algebraic topology needs a “convenient category of spaces” in which to work: the category \mathcal{T} of topological spaces has few good categorical properties beyond having all small limits and colimits. Ideally we would like a subcategory, containing most spaces of interest, which is at least cartesian closed, so that there is a useful notion of function space for any pair of objects. A popular choice for a “convenient category” is the full subcategory of \mathcal{T} consisting of compactly-generated spaces. Another approach is to weaken the notion of topological space, i.e. to embed \mathcal{T} into a larger category, hopefully with better categorical properties.

A topos is a category with enough good properties (including cartesian closedness) that it acts like the category of sets. Thus a topos acts like a mathematical universe with ‘sets’, ‘functions’ and its own internal logic for manipulating them. It is exciting to think that if a “convenient topos of spaces” could be found, then its logical aspects could be applied to the study of its objects. The set-like nature of toposes might make it seem unlikely that this can happen. For instance, every topos is balanced, but the category of topological spaces is famously not. However, sheaves (the objects in Grothendieck toposes) originate from geometry and already behave somewhat like generalized spaces.

I shall begin by elaborating on this observation about Grothendieck toposes, and briefly examine some previous attempts at a “topological topos”. I shall explore the idea of replacing open sets with convergent sequences and see how this leads us to Johnstone’s topos. Finally I shall describe how Johnstone’s topos is a good setting for homotopy theory.

I would like to thank Emily Riehl for organizing the Kan Extension Seminar and for much useful feedback. Thanks go also to the other seminar participants for being a constant source of interesting perspectives, and to Peter Johnstone, for his advice and for writing this incredible paper.

Are there toposes of spaces?

We shall need to be flexible about what we mean by “space”. For the rest of this post I shall try to use the term “topological space” in a strict technical sense (a set of points plus specified open sets), whereas “space” will be a nebulous concept. The idea is that spaces have existence regardless of having been implemented as a topological space or not, and may naturally have more (or perhaps less) structure. Topology merely forms one setting for their rigorous study. In topology we can only detect the topological properties of spaces. For example, \mathbb{R} and (0,1) are isomorphic as topological spaces, but they are far from being the same space: consider how different their implementations as, say, metric spaces are. Some spaces are naturally considered as having algebraic or smooth structure. The type of question one wishes to ask about a space will bear upon the type of object as which it should be implemented.

An extremely important class of toposes consists of the Grothendieck toposes, which are categories of sheaves on a site. A site is a small category together with a Grothendieck coverage (also known as a Grothendieck topology). Informally, the Grothendieck coverage tells us how some objects can be “covered” by maps coming out of other objects. In the special case where the site is a topological space, the objects are open sets and the coverage tells us that an open set is covered by the inclusions of any family of open sets whose union is all of that open set. A sheaf on a site is then a contravariant \mathrm{Set}-valued functor on the underlying category (a presheaf) which satisfies a “unique patching” condition with respect to each covering sieve.

In the following two senses, a Grothendieck topos always behaves like a category of spaces:

(A) One way to describe the properties of a space is to consider the maps into that space. This is the idea behind the homotopy groups, where we consider (homotopy classes of) maps from the n-sphere into a space. Given a small category \mathcal{C}, each object is determined by knowing all the arrows into it and how these arrows “restrict” along other arrows, i.e. precisely the data of the representable presheaf. A non-representable presheaf can be viewed as a generalized object of \mathcal{C}, which is testable by the ‘classical’ objects of \mathcal{C}: it is described entirely by what the maps into it from objects of \mathcal{C} ought to be. If we have in mind that \mathcal{C} is some category of spaces, with some sense in which some spaces are covered by families of maps out of other spaces (i.e. we have a Grothendieck coverage), then we should be able to patch maps into these generalized spaces together. So the topos of sheaves on this site is a setting in which we may be able to implement certain spaces, if we wish to study their properties testable by objects of \mathcal{C}.

(B) The category of presheaves on a small category \mathcal{C} is its free cocompletion. Intuitively, it is the category of objects obtained by formally gluing together objects of \mathcal{C}. The use of the word “gluing” is itself a spatial metaphor. CW-complexes are built out of gluing together cells - simplicial sets are instructions for carrying out this gluing. Manifolds are built from gluing together open subsets of Euclidean space. Purely formal ‘gluing’ is not quite sufficient: the Yoneda embedding of \mathcal{C} into its presheaves typically does not preserve any colimits already in \mathcal{C}. But if \mathcal{C} is a category of spaces, its objects are not neutral with respect to each other: there may be a suitable Grothendieck coverage on \mathcal{C} which tells us how some objects can cover others. The topos of sheaves is then the category of objects obtained by formally gluing objects of \mathcal{C} in a way that respects these coverings. This is strongly connected with the preservation of colimits by the embedding of \mathcal{C} into the sheaves. Colimits in the presheaf topos are constructed pointwise; to get the sheaf colimit one applies the reflection into the category of sheaves (“sheafification”) to the presheaf colimit. The more covers imposed on \mathcal{C}, the more work is done by the sheafification, so the closer we end up to the original colimit.

Are there toposes in topology?

It is far from clear that we can choose a site for which the space-like behaviour of sheaves accords with the usual topological intuition. If we want to use a topological topos for homotopy theory, then ideally it should contain objects that we can recognize as the CW-complexes, and we should be able to construct them via more or less the usual colimits.

Attempt 1: The “gros topos” of Giraud

The idea is to take sheaves on the ‘site’ of topological spaces, where covers are given by families of open inclusions of subspaces whose union is the whole space. We do not automatically get a topos unless the site is small, so instead take some small, full subcategory \mathcal{C} of \mathcal{T}, which is closed under open subspaces. The gros topos is the topos of sheaves for this site.

The Yoneda embedding exhibits \mathcal{C} as a subcategory, and in fact we can ‘embed’ \mathcal{T} via the functor X \mapsto \hom_\mathcal{T}(-,X); this will be full and faithful on a fairly large subcategory. By (B) one might like to consider the gros topos as the category of spaces glued together from objects of \mathcal{C}. This turns out not to be useful, since the site does not have enough covers for colimits to agree with those in \mathcal{T}. Moreover the site is so large that calculations are difficult.

Attempt 2: Lawvere’s topos

We use observation (A). Motivated by the use of paths in homotopy theory, we take M to be the full subcategory of \mathcal{T} whose only object is the closed unit interval I. So M is the monoid of continuous endomorphisms of I. Lawvere’s topos \mathcal{L} is the topos of sheaves on M with respect to the canonical Grothendieck coverage (the largest Grothendieck coverage on M for which \hom_M(-,I) is a sheaf).

Then an object X of \mathcal{L} is a set X(I) of paths, together with, for any continuous \gamma\colon I \to I, a reparametrization map X(\gamma) \colon X(I) \to X(I), where this assignment is functorial. The points of such a space are given by natural transformations 1 \to X, i.e. ‘constant paths’ or paths which are fixed by every reparametrization. We can see which point a path visits at time t by reparametrizing that path by the constant map I \to I with value t. A word of caution: a given object in \mathcal{L} may have distinct paths which agree on points for all time.

This site is much easier to calculate with than the gros site (once we have a handle on the canonical coverage). Again there is a functor P \colon \mathcal{T} \to \mathcal{L} given by X \mapsto \hom_\mathcal{T}(I,X), which is full and faithful on a fairly large subcategory (including CW-complexes). However, it is still the case that the site could do with more covers: the functor P does not preserve all the colimits used to build up CW-complexes. By observation (B), an object of \mathcal{L} is obtained by gluing together copies of the unit interval I, so it is possible to construct the circle S^1 out of copies of I, but we cannot do this in the usual way. The coequalizer of I by its endpoints in \mathcal{L} is not S^1, but a “signet-ring”: it is a circle with a ‘lump’, through which a path can cross only if it waits there for a non-zero amount of time. We cannot solve this problem by adding in more covers, because the coverage is already canonical (adding in more covers would evict the representable \hom_\mathcal{T}(-,I) from the topos).

The key idea in Johnstone’s topos is to replace paths with convergent sequences. Given a topological space X, a convergent sequence in X is a function a \colon \mathbb{N}\cup\{\infty\} \to X such that whenever U \subseteq X is an open set containing a_\infty, there exists an N such that a_n \in U for all n \gt N. The convergent sequences are precisely the continuous maps out of \mathbb{N}\cup\{\infty\} when we give it the topology that makes it the one-point compactification of the discrete space \mathbb{N} - we denote this topological space by \mathbb{N}^+.

Convergent sequences as primitive

It is a basic theorem in general topology that a continuous function f\colon X \to Y between topological spaces preserves convergent sequences. The converse is not true for general topological spaces, but it is true whenever Y is a sequential space. Given a topological space X, a set U \subseteq X is sequentially open if for any convergent sequence (a_n) with a_\infty \in U, (a_n) is eventually in U. (Clearly any open subset is sequentially open.) A topological space is then said to be sequential if all of its sequentially open sets are open. The sequential spaces include all first-countable spaces, and in fact they can be characterized as the topological quotients of metrizable spaces, so they certainly include all CW-complexes.

The notion of convergent sequence is arguably more intuitive than that of open set. For example, each convergent sequence gives you concrete data about the nearness of some family of points to another point, whereas open sets only give you such data when the topology (or at least a neighbourhood basis) is considered as a whole. It would be compelling to define a continuous function as one that preserves convergent sequences. This motivates the study of subsequential spaces.

A subsequential space consists of a set X (of points) and a family of “convergent sequences”: a specified subset of the set of functions \mathbb{N}\cup\{\infty\} \to X, such that:

  1. for every point x \in X, the constant sequence (x) converges to x;
  2. if (x_n) converges to x, then so does every subsequence of (x_n);
  3. if (x_n) is a sequence and x is a point such that every subsequence of (x_n) contains a (sub)subsequence converging to x, then (x_n) converges to x.

The third axiom is the general form of intertwining two or more sequences with the same limit or changing a finite initial segment of a sequence. Note that there is no ‘Hausdorff’-style condition on the convergent sequences: a sequence may converge to more than one limit. A continuous map between subsequential spaces X \to Y is a function from the points of X to the points of Y that preserves convergence of sequences.
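
For readers who like things spelled out, here is a sketch of my own (not taken from Johnstone’s paper) of these axioms as a Lean 4 structure, writing a sequence as a plain function Nat → X and carrying the limit point separately rather than through the point at infinity:

```lean
-- Hypothetical formalization for illustration; the names are my own choices.
def IsReindexing (φ : Nat → Nat) : Prop :=
  ∀ m n, m < n → φ m < φ n   -- strictly increasing: picks out a subsequence

structure SubsequentialSpace (X : Type) where
  Converges : (Nat → X) → X → Prop
  const_conv : ∀ x : X, Converges (fun _ => x) x
  subseq_conv : ∀ (a : Nat → X) (x : X) (φ : Nat → Nat),
    IsReindexing φ → Converges a x → Converges (a ∘ φ) x
  conv_of_subsubseq : ∀ (a : Nat → X) (x : X),
    (∀ φ : Nat → Nat, IsReindexing φ →
      ∃ ψ : Nat → Nat, IsReindexing ψ ∧ Converges (a ∘ φ ∘ ψ) x) →
    Converges a x

/-- Continuity is exactly preservation of convergent sequences. -/
def SeqContinuous {X Y : Type} (SX : SubsequentialSpace X)
    (SY : SubsequentialSpace Y) (f : X → Y) : Prop :=
  ∀ (a : Nat → X) (x : X), SX.Converges a x → SY.Converges (f ∘ a) (f x)
```

The last field is the third axiom verbatim: a sequence converges to x as soon as every subsequence admits a further subsequence converging to x.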

The axioms above are all true of the set of convergent sequences which arise from a topology on a set. In fact, this process gives a full and faithful embedding of sequential spaces into subsequential spaces. Thus sequential spaces live inside both topological and subsequential spaces. They are coreflective in the former and reflective in the latter: given a topological space X, its sequentially open sets constitute a new (finer) topology; given a subsequential space Y, we can consider the “sequentially open” sets with respect to its convergent sequences, and then take all convergent sequences in the resulting (sequential) topological space. Observe that the sense of the adjunction in each case comes from the fact that we either throw in more open sets - so there is a natural map (X)_\text{seq} \to X, or throw in more convergent sequences - so there is a natural map Y \to (Y)_\text{seq}.

In the following I shall denote the category of sequential spaces by \mathcal{F} and that of subsequential spaces by \mathcal{F}'.

Johnstone’s topos

Let \Sigma be the full subcategory of \mathcal{T} on the objects 1 (the singleton space) and \mathbb{N}^+ (the one-point compactification of the discrete space of natural numbers). The arrows in this category can be described without topology as well: as functions, the maps \mathbb{N}^+ \to \mathbb{N}^+ are the eventually constant ones and the ones that “tend to infinity”.

Given an infinite subset T \subseteq \mathbb{N}, let f_T denote the unique order-preserving injection \mathbb{N}^+ \to \mathbb{N}^+ whose image is T \cup \{\infty\}. One can check that there is a Grothendieck coverage J on \Sigma where 1 is covered by only the maximal sieve, and where \mathbb{N}^+ is covered by any sieve R such that:

  1. R contains all of the points n \colon 1 \to \mathbb{N}^+, n \in \mathbb{N}^+.
  2. For any infinite subset T \subseteq \mathbb{N} there exists an infinite subset T' \subseteq T such that f_{T'} \in R.

The topos \mathcal{E} is then defined to be \mathrm{Sh}(\Sigma,J).

The objects in our topos are a slight generalization of subsequential space. If X \in \mathcal{E}, then X(1) is its set of points, and X(\mathbb{N}^+) is its set of convergent sequences. Each point n \colon 1 \to \mathbb{N}^+ induces a ‘projection map’ X(n)\colon X(\mathbb{N}^+) \to X(1), giving you the point of the sequence at time n. The unique map \mathbb{N}^+ \to 1 induces a map X(1) \to X(\mathbb{N}^+), which sends each point to a canonical choice of constant sequence. Note that there may be more than one convergent sequence with the same points, thus it may be helpful to think of X(\mathbb{N}^+) as the set of proofs of convergence for sequences.

Clearly we can embed \mathcal{F}', the subsequential spaces, into \mathcal{E}: the points are the same, and the convergence proofs are just the convergent sequences. The first two axioms are satisfied because of the equations that hold in \Sigma. The third axiom is encoded into the coverage. Conversely, any object X of \mathcal{E} for which the projection maps X(n)\colon X(\mathbb{N}^+) \to X(1), n \in \mathbb{N}^+ are jointly injective is isomorphic to one coming from a subsequential space. There is a functor H \colon \mathcal{T} \to \mathcal{E} sending X \mapsto \hom_\mathcal{T}(-,X), and it is indeed sheaf-valued since it is equal to the composite of the coreflection \mathcal{T} \to \mathcal{F} with the inclusions \mathcal{F} \to \mathcal{F}' \to \mathcal{E}. In fact, the Grothendieck coverage defining \mathcal{E} is canonical, so it is the largest for which this functor is well-defined.

We can use observation (B) to think of \mathcal{E} as all spaces constructed from gluing sequences together. It is just about possible that we could have motivated the construction of \mathcal{E} this way: classically, any sequential space X is the quotient in \mathcal{T} of a metrizable space, which may be taken to be a disjoint union of copies of \mathbb{N}^+ - one for every convergent sequence in X. Compare this with the canonical representation of a presheaf as a colimit of representables (one for each of its elements).

Colimits

It turns out that \mathcal{F}' is the subcategory of \neg\neg-separated objects in \mathcal{E}, hence it is a reflective subcategory. \mathcal{F} is reflective in \mathcal{F}', hence it is also reflective in \mathcal{E}. In particular, all limits in \mathcal{F} are preserved by the inclusion into \mathcal{E}. Take some caution, however, since products do not agree with those in \mathcal{T}: one has to take the sequential coreflection of the topological product. This is only a minor issue; having to modify the product arises in other “convenient categories” such as compactly-generated spaces.

The colimits in \mathcal{F} do agree with those in \mathcal{T} because it is a coreflective subcategory. Surprisingly, the inclusion \mathcal{F} \to \mathcal{E} preserves many of these colimits.

Theorem Let X be a sequential space, and \{U_\alpha \mid \alpha \in A\} an open cover of X. Then the obvious colimit diagram in \mathcal{F}: \begin{matrix} U_\alpha\cap U_\beta & \rightarrow & U_\alpha & & \\ & \searrow & & \searrow & \\ U_\beta \cap U_\gamma & \rightarrow & U_\beta & \rightarrow & X \\ \vdots & \searrow & & \nearrow & \\ & & U_\gamma & & \\ & & \vdots & & \end{matrix} is preserved by the embedding \mathcal{F} \to \mathcal{E}.

Proof The recipe for this sort of theorem is: take the colimit in presheaves, show that the comparison map is monic, then show that it is J-dense, for then it will exhibit X as the colimit upon reflecting into the topos \mathcal{E}. The colimit L in presheaves is calculated “objectwise”, so L has the same points as X, but only those convergent sequences which are entirely within some U_\alpha (hence the comparison map L \to X is monic). To sheafify, we need to add in all those sequences x \in X(\mathbb{N}^+) which are locally in L, i.e. for which the sieve \{f \colon ? \to \mathbb{N}^+ \mid X(f)(x) \in L(?) \} in \Sigma is J-covering. For any x \in X(\mathbb{N}^+), this sieve clearly contains all the points 1 \to \mathbb{N}^+. But x must also be eventually within one of the U_\alpha, so the second condition for the covering sieves is also satisfied. \square

There are several other colimit preservation results one can talk about (with similar proofs to the above). The amazing consequence of these is that the colimits used to construct CW-complexes are all preserved by the embedding \mathcal{F} \to \mathcal{E}. Thus classical homotopy theory embeds into \mathcal{E} and we have successfully found a topos of spaces which agrees with the classical theory.

Geometric realization

Let \Delta be the category of non-zero finite ordinals and order-preserving maps. Then objects of the presheaf category [\Delta^\mathrm{op},\mathrm{Set}] are known as simplicial sets.

Theorem [\Delta^\mathrm{op},\mathrm{Set}] is the classifying topos for intervals in \mathrm{Set}-toposes.

The closed unit interval [0,1] is sequential and is in fact an interval (a totally ordered object with distinct top and bottom elements). Thus it corresponds to a geometric morphism \mathcal{E} \to [\Delta^\mathrm{op},\mathrm{Set}] (an adjunction (f^\star \dashv f_\star) with f^\star left-exact).

Theorem If S \in [\Delta^\mathrm{op},\mathrm{Set}] is a simplicial set, then f^\star(S) is its geometric realization, considered as a sequential space and hence as an object of \mathcal{E}. If X \in \mathcal{E} is a sequential space, then f_\star(X) is its singular complex.

The usual geometric realization is not left-exact if considered to take values in \mathcal{T}; one must choose a “convenient subcategory” first, and even then there is some work to do in proving it. Here the left-exactness just arises out of the general theory of geometric morphisms. Should we wish to do so, the above method allows us to replace [0,1] with any other object that the internal logic of \mathcal{E} sees as an interval, to get a different realization of simplicial sets.

The above is far from a complete survey of “On a Topological Topos”, which contains several more results of interest relating to \mathcal{E} and captures the elegance of using the site \Sigma for calculation - I thoroughly recommend taking a look if you know some topos theory. We have seen enough though to understand that for many spaces the sequential properties align with the topological properties. Unfortunately, \mathcal{E} is yet to receive the attention it deserves.

Matt Strassler: A Week in Canada

It’s been a quiet couple of weeks on the blog, something which often indicates that it’s been anything but quiet off the blog. Such was indeed the case recently.

For one thing, I was in Canada last week. I had been kindly invited to give two talks at the University of Western Ontario, one of Canada’s leading universities for science. One of the talks, the annual Nerenberg lecture (in memory of Professor Morton Nerenberg), is intended for the general public, so I presented a lecture on The 2013 Nobel Prize: The 50-Year Quest for the Higgs Boson. While I have given a talk on this subject before (an older version is on-line), I felt some revisions would be useful. The other talk was for members of the applied mathematics department, which hosts a diverse group of academics. Unlike a typical colloquium for a physics department, where I can assume that the vast majority of the audience has had university-level quantum mechanics, this talk required me to adjust my presentation for a much broader scientific audience than usual. I followed, to an extent, my website’s series on Fields and Particles and on How the Higgs Field Works, both of which require first-year university math and physics, but nothing more. Preparation of the two talks, along with travel, occupied most of my free time over recent days, so I haven’t been able to write, or even respond to readers’ questions, unfortunately.

I also dropped in at Canada’s Perimeter Institute on Friday, when it was hosting a small but intense one-day workshop on the recent potentially huge discovery by the BICEP2 experiment of what appears to be a signature of gravitational waves from the early universe. This offered me an opportunity to hear some of the world’s leading experts talking about the recent measurement and its potential implications (if it is correct, and if the simplest interpretation of it is correct). Alternative explanations of the experiment’s results were also mentioned. Also, there was a lot of discussion about the future, both the short-term and the long-term. Quite a few measurements will be made in the next six to twelve months that will shed further light on the BICEP2 measurement, and on its moderate conflict with the simplest interpretation of certain data from the Planck satellite. Further down the line, a very important step will be to reduce the amount of B-mode polarization that arises from the gravitational lensing of E-mode polarization, a method called “delensing”; this will make it easier to observe the B-mode polarization from gravitational waves (which is what we’re interested in) even at rather small angular scales (high “multipoles”). Looking much further ahead, we will be hearing a lot of discussion about huge new space-based gravitational wave detectors such as BBO [Big Bang Observer]. (Actually the individual detectors are quite small, but they are spaced at great distances.) These can potentially measure gravitational waves whose wavelength is comparable to the size of the Earth’s orbit or even larger, which is still much smaller than those apparently detected by BICEP2 in the polarization of the cosmic microwave background. Anyway, assuming what BICEP2 has really done is discover gravitational waves from the very early universe, this subject now has a very exciting future, and there is lots to do, to discuss and to plan.

I wish I could promise to provide a blog post summarizing carefully what I learned at the conference. But unfortunately, that brings me to the other reason blogging has been slow. While I was away, I learned that the funding situation for science in the United States is even worse than I expected. Suffice it to say that this presents a crisis that will interfere with blogging work, at least for a while.



April 06, 2014

Tommaso Dorigo: Standard Model Or Minimal SUSY?

If I look back at the first time I discussed the important graph of the top quark mass versus the W boson mass, nine years ago, I am amazed to see how much progress we have made since then. The top quark mass in 2005 was known with 2-3 GeV precision, the W boson mass with 35 MeV precision, and we did not know where the Higgs boson was, or if there was one.
