Planet Musings

January 18, 2017

Georg von Hippel: If you speak German ...

... you might find this video amusing.


January 17, 2017

Doug Natelson: What is the difference between science and engineering?

In my colleague Rebecca Richards-Kortum's great talk at Rice's CUWiP meeting this past weekend, she spoke about her undergrad degree in physics at Nebraska, her doctorate in medical physics from MIT, and how she ended up doing bioengineering.  As a former undergrad engineer who went the other direction, I think her story did a good job of illustrating the distinctions between science and engineering, and the common thread of problem-solving that connects them.

In brief, science is about figuring out the ground rules about how the universe works.   Engineering is about taking those rules, and then figuring out how to accomplish some particular task.   Both of these involve puzzle-like problem-solving.  As a physics example on the experimental side, you might want to understand how electrons lose energy to vibrations in a material, but you only have a very limited set of tools at your disposal - say voltage sources, resistors, amplifiers, maybe a laser and a microscope and a spectrometer, etc.  Somehow you have to formulate a strategy using just those tools.  On the theory side, you might want to figure out whether some arrangement of atoms in a crystal results in a lowest-energy electronic state that is magnetic, but you only have some particular set of calculational tools - you can't actually solve the complete problem and instead have to figure out what approximations would be reasonable, keeping the essentials and neglecting the extraneous bits of physics that aren't germane to the question.

Engineering is the same sort of process, but goal-directed toward an application rather than specifically the acquisition of new knowledge.  You are trying to solve a problem, like constructing a machine that functions like a CPAP, but has to be cheap and incredibly reliable, and because of the price constraint you have to use largely off-the-shelf components.  (Here's how it's done.)

People sometimes act like there is a vast gulf between scientists and engineers - like the former don't have common sense or real-world perspective, or like the latter are somehow less mathematical or sophisticated.  Those stereotypes even come through in pop culture, but the differences are much less stark than that.  Both science and engineering involve creativity and problem-solving under constraints.  Often which one is for you depends on what you find most interesting at a given time - there are plenty of scientists who go into engineering, and engineers can pursue and acquire basic knowledge along the way.  Particularly in the modern, interdisciplinary world, the distinction is less important than ever before.

January 16, 2017

John Baez: Solar Irradiance Measurements

guest post by Nadja Kutz

This blog post is based on a thread in the Azimuth Forum.

Current theories about the Sun's lifetime indicate that the Sun will turn into a red giant in about 5 billion years. How and when this process is going to be destructive to the Earth is still debated. Apparently, according to more or less current theories, the Sun's luminosity has been increasing quasilinearly. On page 3 of

• K.-P. Schröder and Robert Connon Smith, Distant future of the Sun and Earth revisited, 2008.

we read:

The present Sun is increasing its average luminosity at a rate of 1% in every 110 million years, or 10% over the next billion years.

Unfortunately I feel a bit doubtful about this, in particular after I looked at some irradiation measurements. But let’s recap a bit.

In the Azimuth Forum I asked for information about solar irradiance measurements. Why I was originally interested in how bright the Sun is shining is a longer story, which includes discussions about the global warming potential of methane. For this post I prefer to omit this lengthy historical survey of my original motivations (maybe I’ll come back to it later). Meanwhile there is also a newer reason why I am interested in solar irradiance measurements, which I want to talk about here.

Strictly speaking I was not only interested in knowing how bright the Sun is shining overall, but also in how bright each of its ‘components’ is shining. That is, I wanted to see spectrally resolved solar irradiance measurements—and in particular, measurements in the range between the wavelengths of roughly 650 and 950 nanometers.

This led me to the SORCE mission, a NASA-sponsored satellite mission whose website is hosted at the University of Colorado. The website very nicely provides an interactive interface, including a fairly clear and intuitive app called LISIRD, with which the spectral measurements of the Sun can be studied.

As a side remark I should mention that this mission belongs to NASA's Earth Science program, which is currently under threat of being scrapped.

By using this app, I found in the 650–950 nanometer range a very strange rise in radiation between 2003 and 2016, which happened mainly in the last 2-3 years. You can see this rise here (click to enlarge):

[Figure: the spectral line at 774.5 nm from day 132 to day 5073, where day 132 is 24 January 2003 and day 5073 is the end of 2016]

Now, fluctuations within certain spectral ranges within the Sun’s spectrum are not news. Here, however, it looked as if a rather stable range suddenly started to change rather “dramatically”.

I put the word “dramatically” in quotes for a couple of reasons.

Spectral measurements are complicated and prone to measurement errors. Subtle issues such as dirty lenses are already enough to suggest that this is no easy feat, so this strange rise might easily be due to a measurement failure. Moreover, as I said, this had looked like a fairly stable range over the course of ten years. But maybe this new rise in irradiation is part of the 11-year solar cycle, i.e., a common phenomenon. In addition, although the rise looks big, it may overall still be rather subtle.

So: how subtle or non-subtle is it then?

In order to assess that, I made a quick estimate (see the Forum discussion) and found that if all the additional radiation reached the ground (which of course it doesn't, due to absorption), then on 1000 square meters you could easily power a lawn mower with that subtle change! That is, my estimate was 1200 watts for that patch of lawn. Whoa!

That was disconcerting enough that I downloaded the data, linearly interpolated it, and calculated the power of that change. I wrote a program in JavaScript to do that. The computation gave an answer of 1000 watts, i.e., my estimate was fairly close. Whoa again!
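For readers who want to reproduce this kind of estimate, here is a minimal sketch of the interpolate-and-integrate step (in Python, not the JavaScript program described above). The file names and column layout are assumptions about how exported LISIRD-style data might look.

```python
# Minimal sketch (not the JavaScript program described above) of the calculation:
# integrate the change in spectral irradiance over a wavelength band and scale to 1000 m^2.
# The file names and column layout are assumptions about the exported data.
import numpy as np

def load_spectrum(path):
    # Each file is assumed to hold two columns: wavelength [nm], spectral irradiance [W/m^2/nm].
    data = np.loadtxt(path, delimiter=",")
    return data[:, 0], data[:, 1]

wl_a, irr_a = load_spectrum("sorce_ssi_day0132.csv")   # spectrum in early 2003
wl_b, irr_b = load_spectrum("sorce_ssi_day5073.csv")   # spectrum at the end of 2016

# Put both spectra on a common wavelength grid by linear interpolation,
# restricted to the band of interest (650-950 nm).
grid = np.linspace(650.0, 950.0, 2000)
a = np.interp(grid, wl_a, irr_a)
b = np.interp(grid, wl_b, irr_b)

# Integrate the difference over wavelength (trapezoidal rule) to get W/m^2,
# then scale to a 1000 m^2 patch of ground (ignoring atmospheric absorption).
delta_irradiance = np.trapz(b - a, grid)
print("change over 650-950 nm:", delta_irradiance, "W/m^2")
print("on 1000 m^2:", delta_irradiance * 1000.0, "W")
```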

How does this translate into overall changes in solar irradiance? Some increase had already been noticed. In 2003, NASA wrote on its webpage:

Although the inferred increase of solar irradiance in 24 years, about 0.1 percent, is not enough to cause notable climate change, the trend would be important if maintained for a century or more.

That was 13 years ago.

I then used my program to calculate the irradiance for one day in 2016 between the wavelengths of 180.5 nm and 1797.62 nm, a quite big part of the solar spectrum, and got the value 627 W/m2. I computed the difference between this and one day in 2003, approximately one solar cycle earlier, and got 0.61 W/m2, which is 0.1% in 13 years rather than in 24 years. Of course this is not an average value, it is not really adjusted to the solar cycle, and fluctuations play a big role in some parts of the spectrum, but this might indicate that the overall rate of rise in solar radiation may have doubled. Likewise concerning the question of the Sun's luminosity: to assess luminosity one would need to take into account the actual satellite-Earth orbit on the day of measurement, since the distance to the Sun varies. But still, at first glance this all appears disconcerting.

Given that this spectral range overlaps, for example, with absorption bands of water (clouds!), this should at least be discussed.

See how the spectrum splits into a purple and dark red line in the lower circle? (Click to enlarge.)

[Figure: difference in spectrum between day 132 and day 5073]

The upper circle displays another rise, which is discussed in the forum.

In conclusion, all this looks as if it needs to be monitored a bit more closely. It is important to see whether these rises in irradiance also show up in other measurements, so I asked in the Azimuth Forum, but so far have gotten no answer.

The Russian Wikipedia article about solar irradiance unfortunately contains no links to Russian satellite missions (unless I have overlooked something), and there is no Chinese or Indian Wikipedia article about solar irradiance. I also couldn't find any publicly accessible spectral irradiance measurements on the ESA website (although they have some satellites out there). In December I wrote an email to Wolfgang Finsterle, the head of the solar radiometry section of the World Radiation Center (WRC), but I've had no answer yet.

In short: if you know about publicly available solar spectral irradiance measurements other than the LISIRD ones, then please let me know.


David Hogg: so many things (I love Wednesdays)

In the stars group meeting at CCA, there was huge attendance today. David Spergel (CCA) opened by giving a sense of the WFIRST GO and GI discussion that will happen this week at CCA. The GI program is interesting: It is like an archival program within WFIRST. This announcement quickly ran into an operational discussion about what WFIRST can do to avoid saturation of bright stars.

Katia Cunha (Observatorio Nacional, Brazil) spoke about two topics in APOGEE. The first is that they have found new elements in the spectra! They did this by looking at the spectra of s-process-enhanced stars (metal-poor ones) and finding strong, unidentified lines. This is exciting, because before this, APOGEE had no measurements of the s-process. The second topic is that they are starting to get working M-dwarf models, which is a first, and can measure the abundances of 13 elements in M dwarfs. Verne Smith (NOAO) noted that this is very important for the future use of these spectrographs and exoplanet science in the age of TESS. On this latter point, the huge breakthrough was in improvements to the molecular line lists.

Dave Bennett (GSFC) talked to us about observations of the Bulge with K2 and other instruments to do microlensing, microlensing parallax, and exoplanet discovery. He noted that there isn't a huge difference between doing characterization and doing search: The photometry has to be good to find microlensing events and not be fooled by false positives. He is in NYC this week working with Dun Wang (NYU).

Jeffrey Carlin (NOAO) led a discussion of detailed abundances for Sagittarius-stream stars as obtained with a CFHT spectrograph fiber-fed from Gemini N. These abundances might unravel the stream for us, and inform dynamical models. This morphed into a conversation about why the stellar atmosphere models are so problematic, which we didn't resolve (surprised?). I pitched a project in which we use Carlin's data at high resolution to train a model for the LAMOST data, as per Anna Y. Q. Ho (Caltech), and then do science with tens of thousands of stars.

In the cosmology group meeting, we discussed the possibility of evaluating (directly) the likelihood for a CMB map or time-ordered data given the C-ells and a noise model. As my loyal reader knows, this requires not just performing solve (inverse multiplication) operations but also (importantly) determinant evaluations. For the discussion, mathematicians Mike O'Neil (NYU) and Leslie Greengard (CCA) and Charlie Epstein (Penn) joined us, with Mike O’Neil leading the discussion about how we might achieve this, computationally. O’Neil outlined two strategies, one of which takes advantage of a possible HODLR form (Ambikasaran et al), another of which takes advantage of the spherical-harmonics transform. There was some disagreement about whether the likelihood function is worth computing, with Hogg on one end (guess which) and Naess and Hill and Spergel more skeptical. Spergel noted that if we could evaluate the LF for the CMB, it opens up the possibility of doing it for LSS or intensity mapping in a three-dimensional (thick) spherical shell (think: redshift distortions and fingers of god and so on).
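For concreteness, here is a brute-force, toy-sized sketch of the Gaussian log-likelihood in question, which makes explicit why both a linear solve and a log-determinant are needed. The covariance used here is an invented toy (not a CMB model), and dense O(n^3) algebra like this is exactly what does not scale to real maps, which is why HODLR-type and harmonic-space methods were being discussed.

```python
# Brute-force sketch of the Gaussian log-likelihood for a map d given a covariance C:
#   ln L = -1/2 d^T C^{-1} d - 1/2 ln det C - (n/2) ln 2pi.
# Both a linear solve and a determinant evaluation appear; dense algebra only works at toy sizes.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gaussian_loglike(d, C):
    n = d.size
    cf = cho_factor(C)                              # Cholesky factorization of C
    solve_term = d @ cho_solve(cf, d)               # d^T C^{-1} d via a solve, no explicit inverse
    logdet = 2.0 * np.sum(np.log(np.diag(cf[0])))   # ln det C from the Cholesky factor
    return -0.5 * (solve_term + logdet + n * np.log(2.0 * np.pi))

# Toy example: a smooth "signal" covariance plus white noise on a 1-D map.
rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
signal_cov = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.05 ** 2)
C = signal_cov + 0.1 * np.eye(n)
d = rng.multivariate_normal(np.zeros(n), C)
print(gaussian_loglike(d, C))
```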

Between meetings, I discussed deconvolutions of the TGAS color-magnitude diagram with Leistedt and Anderson, and low-hanging fruit in the comoving-star world with Oh and Price-Whelan.

January 15, 2017

David Hogg: unsupervised models of stars

I am very excited these days about the data-driven model of stellar spectra that Megan Bedell (Chicago) and I are building. In its current form, all it does is fit multi-epoch spectra of a single star with three sets of parameters: a normalization level (one per epoch) times a wavelength-by-wavelength spectral model (one parameter per model wavelength), shifted by a Doppler shift (one per epoch). This very straightforward technology appears to be fitting the spectra to something close to the photon noise limit (which blows me away). The places where it doesn't fit appear to be interesting. Some of them are telluric absorption residuals, and some are intrinsic variations in the lines in the stellar spectra that are sensitive to activity and convection.
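This is not our actual code, just a minimal sketch of the kind of model described above: a shared template (one parameter per model wavelength), multiplied by a per-epoch normalization and evaluated at Doppler-shifted wavelengths, fit jointly by least squares. The helper names and the optimizer choice are illustrative.

```python
# Minimal sketch (not the actual code) of the model described above: per-epoch normalization a_e
# times a shared template f(lambda), evaluated at Doppler-shifted wavelengths lambda/(1 + v_e/c),
# with template values, normalizations, and velocities fit jointly.
import numpy as np
from scipy.optimize import least_squares

C_KMS = 299792.458  # speed of light in km/s

def model_fluxes(params, wavelengths, n_epochs, template_grid):
    n_grid = template_grid.size
    template = params[:n_grid]                  # one parameter per model wavelength
    norms = params[n_grid:n_grid + n_epochs]    # one normalization per epoch
    velocities = params[n_grid + n_epochs:]     # one radial velocity (km/s) per epoch
    fluxes = []
    for a, v in zip(norms, velocities):
        shifted = wavelengths / (1.0 + v / C_KMS)   # Doppler-shift the observed wavelength grid
        fluxes.append(a * np.interp(shifted, template_grid, template))
    return np.concatenate(fluxes)

def residuals(params, wavelengths, data, n_epochs, template_grid):
    return model_fluxes(params, wavelengths, n_epochs, template_grid) - data.ravel()

def fit(wavelengths, data):
    # data: shape (n_epochs, n_pixels) spectra on a common wavelength grid.
    # Starting guess: template = mean spectrum, normalizations = 1, velocities = 0.
    n_epochs = data.shape[0]
    template_grid = wavelengths.copy()
    x0 = np.concatenate([data.mean(axis=0), np.ones(n_epochs), np.zeros(n_epochs)])
    return least_squares(residuals, x0, args=(wavelengths, data, n_epochs, template_grid))
```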

Today we talked about scaling this all up; right now we can only do a small part of the spectrum at a time (and we have a few hundred thousand spectral pixels!). We also spoke about how to regress the residuals against velocity or activity. The current plan is to investigate the residuals, but of course if we find anything we should add it in to the generative model and re-start.

David Hogg: #hackAAS at #aas229

Today was the (fifth, maybe?) AAS Hack Day; it was also the fifth day of #aas229. As always, I had a great time and great things happened. I won't use this post to list everything from the wrap-up session, but here are some personal, biased highlights:

Inclusive astronomy database
Hlozek, Gidders, Bridge, and Law worked together to create a database and web front-end for resources that astronomers can read (or use) about inclusion and astronomy, inspired in part by things said earlier at #aas229 about race and astronomy. Their system is just a prototype, but it has a few things in it and it is designed to help you find resources but also add resources.
Policy letter help tool
Brett Morris led a hack that created a web interface into which you can input a letter you would like to write to your representative about an issue. It searches for words that are bad to use in policy discussions and asks you to change them, and also gives you the names and addresses of the people to whom you should send it! It was just a prototype, because it turns out there is no way right now to automatically obtain representative names and contact information. That was a frustrating finding about the state of #opengov.
Budget planetarium how-to
Ellie Schwab and a substantial crew got together a budget and resources for building a low-buck but fully functional planetarium. One component was WWT, which is now open source.
Differential equations
Horvat and Galvez worked on solving differential equations using basis functions, to learn (and re-learn) methods that might be applicable to new kinds of models of stars. They built some notebooks that demonstrate that you can easily solve differential equations very accurately with basis functions, but that if you choose a bad basis, you get bad answers! (A minimal sketch of the basic idea appears after this list.)
K2 and the sky
Stephanie Douglas made an interface to the K2 data that shows a postage stamp from the data, the light curve, and then aligned (overlaid, even) imaging from other imaging surveys. This involved figuring out some stuff about K2's world coordinate systems, and making it work for the world.
Poster clothing
Once again, the sewing machines were out! I actually own one of these now, just for hack day. Pagnotta led a very successful sewing and knitting crew. Six of the team used a sewing machine for the first time today! In case you are still stuck in 2013: The material for sewing is the posters, which all the cool kids have printed on fabric, not paper these days!
Meta-hack
Erik Tollerud built some tools for the long-term storage and archiving of #hackAAS hacks. These leverage GitHub under the hood.
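As a small illustration of the basis-function approach from the differential-equations hack above (not the hackers' actual notebooks; the equation, basis, and sizes are made up for the example): expand the solution in a Chebyshev basis, enforce the equation at collocation points plus boundary conditions, and solve the resulting linear system.

```python
# Sketch of basis-function collocation: solve y'' + y = 0 with y(0) = 0, y(pi/2) = 1
# (exact solution sin x) by expanding y(x) = sum_k c_k T_k and solving a linear system.
import numpy as np
from numpy.polynomial import chebyshev as C

n = 20                                    # number of Chebyshev basis functions
x = np.linspace(0.0, np.pi, 200)          # collocation points on [0, pi]
t = 2.0 * x / np.pi - 1.0                 # map to the Chebyshev interval [-1, 1]

# Values and second derivatives of each basis function T_k at the collocation points.
V = np.zeros((x.size, n))
D2 = np.zeros((x.size, n))
for k in range(n):
    e = np.zeros(n); e[k] = 1.0
    V[:, k] = C.chebval(t, e)
    D2[:, k] = C.chebval(t, C.chebder(e, 2)) * (2.0 / np.pi) ** 2   # chain rule for x -> t

# Stack ODE residual rows and the two boundary-condition rows, then solve by least squares.
bc = np.zeros((2, n))
for k in range(n):
    e = np.zeros(n); e[k] = 1.0
    bc[0, k] = C.chebval(-1.0, e)         # x = 0
    bc[1, k] = C.chebval(0.0, e)          # x = pi/2
A = np.vstack([D2 + V, bc])
rhs = np.concatenate([np.zeros(x.size), [0.0, 1.0]])
coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)

print("max error vs sin(x):", np.max(np.abs(V @ coeffs - np.sin(x))))
# Repeating this with a raw monomial basis of the same size is far worse conditioned --
# one concrete sense in which "a bad basis gives bad answers".
```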

There were many other hacks, including people learning how to use testing and integration tools, people learning to use the ADS API, people learning how to use version control and GitHub, testing of different kinds of photometry, and visualization of various kinds of data. It was a great day, and I can't wait for next year.

Huge thanks to our corporate sponsor, Northrop Grumman, and my co-organizers Kelle Cruz, Meg Schwamb, and Abigail Stevens. NG provided great food, and Schwamb did a great job helping everyone in the room understand the (constructive, open, friendly, fun) point of the day.

David Hogg: conversations

Not much research today, but I did have conversations with Lauren Anderson (Flatiron) about deconvolving the observed (by Gaia TGAS and APASS) color-magnitude diagram of stars, with Leslie Greengard (Flatiron) and Alex Barnett (Dartmouth) about cross-over activities between CCA and CCB at Flatiron, and with Kyle Cranmer (NYU) about his immense NSF proposal.

John Baez: The Irreversible Momentum of Clean Energy

The president of the US recently came out with an article in Science. It’s about climate change and clean energy:

• Barack Obama, The irreversible momentum of clean energy, Science, 13 January 2017.

Since it’s open-access, I’m going to take the liberty of quoting the whole thing, minus the references, which provide support for a lot of his facts and figures.

The irreversible momentum of clean energy

The release of carbon dioxide (CO2) and other greenhouse gases (GHGs) due to human activity is increasing global average surface air temperatures, disrupting weather patterns, and acidifying the ocean. Left unchecked, the continued growth of GHG emissions could cause global average temperatures to increase by another 4°C or more by 2100 and by 1.5 to 2 times as much in many midcontinent and far northern locations. Although our understanding of the impacts of climate change is increasingly and disturbingly clear, there is still debate about the proper course for U.S. policy — a debate that is very much on display during the current presidential transition. But putting near-term politics aside, the mounting economic and scientific evidence leave me confident that trends toward a clean-energy economy that have emerged during my presidency will continue and that the economic opportunity for our country to harness that trend will only grow. This Policy Forum will focus on the four reasons I believe the trend toward clean energy is irreversible.

ECONOMIES GROW, EMISSIONS FALL

The United States is showing that GHG mitigation need not conflict with economic growth. Rather, it can boost efficiency, productivity, and innovation. Since 2008, the United States has experienced the first sustained period of rapid GHG emissions reductions and simultaneous economic growth on record. Specifically, CO2 emissions from the energy sector fell by 9.5% from 2008 to 2015, while the economy grew by more than 10%. In this same period, the amount of energy consumed per dollar of real gross domestic product (GDP) fell by almost 11%, the amount of CO2 emitted per unit of energy consumed declined by 8%, and CO2 emitted per dollar of GDP declined by 18%.

The importance of this trend cannot be overstated. This “decoupling” of energy sector emissions and economic growth should put to rest the argument that combatting climate change requires accepting lower growth or a lower standard of living. In fact, although this decoupling is most pronounced in the United States, evidence that economies can grow while emissions do not is emerging around the world. The International Energy Agency’s (IEA’s) preliminary estimate of energy related CO2 emissions in 2015 reveals that emissions stayed flat compared with the year before, whereas the global economy grew. The IEA noted that “There have been only four periods in the past 40 years in which CO2 emission levels were flat or fell compared with the previous year, with three of those — the early 1980s, 1992, and 2009 — being associated with global economic weakness. By contrast, the recent halt in emissions growth comes in a period of economic growth.”

At the same time, evidence is mounting that any economic strategy that ignores carbon pollution will impose tremendous costs to the global economy and will result in fewer jobs and less economic growth over the long term. Estimates of the economic damages from warming of 4°C over preindustrial levels range from 1% to 5% of global GDP each year by 2100. One of the most frequently cited economic models pins the estimate of annual damages from warming of 4°C at ~4% of global GDP, which could lead to lost U.S. federal revenue of roughly $340 billion to $690 billion annually.

Moreover, these estimates do not include the possibility of GHG increases triggering catastrophic events, such as the accelerated shrinkage of the Greenland and Antarctic ice sheets, drastic changes in ocean currents, or sizable releases of GHGs from previously frozen soils and sediments that rapidly accelerate warming. In addition, these estimates factor in economic damages but do not address the critical question of whether the underlying rate of economic growth (rather than just the level of GDP) is affected by climate change, so these studies could substantially understate the potential damage of climate change on the global macroeconomy.

As a result, it is becoming increasingly clear that, regardless of the inherent uncertainties in predicting future climate and weather patterns, the investments needed to reduce emissions — and to increase resilience and preparedness for the changes in climate that can no longer be avoided — will be modest in comparison with the benefits from avoided climate-change damages. This means, in the coming years, states, localities, and businesses will need to continue making these critical investments, in addition to taking common-sense steps to disclose climate risk to taxpayers, homeowners, shareholders, and customers. Global insurance and reinsurance businesses are already taking such steps as their analytical models reveal growing climate risk.

PRIVATE-SECTOR EMISSIONS REDUCTIONS

Beyond the macroeconomic case, businesses are coming to the conclusion that reducing emissions is not just good for the environment — it can also boost bottom lines, cut costs for consumers, and deliver returns for shareholders.

Perhaps the most compelling example is energy efficiency. Government has played a role in encouraging this kind of investment and innovation. My Administration has put in place (i) fuel economy standards that are net beneficial and are projected to cut more than 8 billion tons of carbon pollution over the lifetime of new vehicles sold between 2012 and 2029 and (ii) 44 appliance standards and new building codes that are projected to cut 2.4 billion tons of carbon pollution and save $550 billion for consumers by 2030.

But ultimately, these investments are being made by firms that decide to cut their energy waste in order to save money and invest in other areas of their businesses. For example, Alcoa has set a goal of reducing its GHG intensity 30% by 2020 from its 2005 baseline, and General Motors is working to reduce its energy intensity from facilities by 20% from its 2011 baseline over the same timeframe. Investments like these are contributing to what we are seeing take place across the economy: Total energy consumption in 2015 was 2.5% lower than it was in 2008, whereas the economy was 10% larger.

This kind of corporate decision-making can save money, but it also has the potential to create jobs that pay well. A U.S. Department of Energy report released this week found that ~2.2 million Americans are currently employed in the design, installation, and manufacture of energy-efficiency products and services. This compares with the roughly 1.1 million Americans who are employed in the production of fossil fuels and their use for electric power generation. Policies that continue to encourage businesses to save money by cutting energy waste could pay a major employment dividend and are based on stronger economic logic than continuing the nearly $5 billion per year in federal fossil-fuel subsidies, a market distortion that should be corrected on its own or in the context of corporate tax reform.

MARKET FORCES IN THE POWER SECTOR

The American electric-power sector — the largest source of GHG emissions in our economy — is being transformed, in large part, because of market dynamics. In 2008, natural gas made up ~21% of U.S. electricity generation. Today, it makes up ~33%, an increase due almost entirely to the shift from higher-emitting coal to lower-emitting natural gas, brought about primarily by the increased availability of low-cost gas due to new production techniques. Because the cost of new electricity generation using natural gas is projected to remain low relative to coal, it is unlikely that utilities will change course and choose to build coal-fired power plants, which would be more expensive than natural gas plants, regardless of any near-term changes in federal policy. Although methane emissions from natural gas production are a serious concern, firms have an economic incentive over the long term to put in place waste-reducing measures consistent with standards my Administration has put in place, and states will continue making important progress toward addressing this issue, irrespective of near-term federal policy.

Renewable electricity costs also fell dramatically between 2008 and 2015: the cost of electricity fell 41% for wind, 54% for rooftop solar photovoltaic (PV) installations, and 64% for utility-scale PV. According to Bloomberg New Energy Finance, 2015 was a record year for clean energy investment, with those energy sources attracting twice as much global capital as fossil fuels.

Public policy — ranging from Recovery Act investments to recent tax credit extensions — has played a crucial role, but technology advances and market forces will continue to drive renewable deployment. The levelized cost of electricity from new renewables like wind and solar in some parts of the United States is already lower than that for new coal generation, without counting subsidies for renewables.

That is why American businesses are making the move toward renewable energy sources. Google, for example, announced last month that, in 2017, it plans to power 100% of its operations using renewable energy — in large part through large-scale, long-term contracts to buy renewable energy directly. Walmart, the nation’s largest retailer, has set a goal of getting 100% of its energy from renewables in the coming years. And economy-wide, solar and wind firms now employ more than 360,000 Americans, compared with around 160,000 Americans who work in coal electric generation and support.

Beyond market forces, state-level policy will continue to drive clean-energy momentum. States representing 40% of the U.S. population are continuing to move ahead with clean-energy plans, and even outside of those states, clean energy is expanding. For example, wind power alone made up 12% of Texas’s electricity production in 2015 and, at certain points in 2015, that number was >40%, and wind provided 32% of Iowa’s total electricity generation in 2015, up from 8% in 2008 (a higher fraction than in any other state).

GLOBAL MOMENTUM

Outside the United States, countries and their businesses are moving forward, seeking to reap benefits for their countries by being at the front of the clean-energy race. This has not always been the case. A short time ago, many believed that only a small number of advanced economies should be responsible for reducing GHG emissions and contributing to the fight against climate change. But nations agreed in Paris that all countries should put forward increasingly ambitious climate policies and be subject to consistent transparency and accountability requirements. This was a fundamental shift in the diplomatic landscape, which has already yielded substantial dividends. The Paris Agreement entered into force in less than a year, and, at the follow-up meeting this fall in Marrakesh, countries agreed that, with more than 110 countries representing more than 75% of global emissions having already joined the Paris Agreement, climate action “momentum is irreversible”. Although substantive action over decades will be required to realize the vision of Paris, analysis of countries’ individual contributions suggests that meeting medium-term respective targets and increasing their ambition in the years ahead — coupled with scaled-up investment in clean-energy technologies — could increase the international community’s probability of limiting warming to 2°C by as much as 50%.

Were the United States to step away from Paris, it would lose its seat at the table to hold other countries to their commitments, demand transparency, and encourage ambition. This does not mean the next Administration needs to follow identical domestic policies to my Administration’s. There are multiple paths and mechanisms by which this country can achieve — efficiently and economically — the targets we embraced in the Paris Agreement. The Paris Agreement itself is based on a nationally determined structure whereby each country sets and updates its own commitments. Regardless of U.S. domestic policies, it would undermine our economic interests to walk away from the opportunity to hold countries representing two-thirds of global emissions — including China, India, Mexico, European Union members, and others — accountable. This should not be a partisan issue. It is good business and good economics to lead a technological revolution and define market trends. And it is smart planning to set long term emission-reduction targets and give American companies, entrepreneurs, and investors certainty so they can invest and manufacture the emission-reducing technologies that we can use domestically and export to the rest of the world. That is why hundreds of major companies — including energy-related companies from ExxonMobil and Shell, to DuPont and Rio Tinto, to Berkshire Hathaway Energy, Calpine, and Pacific Gas and Electric Company — have supported the Paris process, and leading investors have committed $1 billion in patient, private capital to support clean-energy breakthroughs that could make even greater climate ambition possible.

CONCLUSION

We have long known, on the basis of a massive scientific record, that the urgency of acting to mitigate climate change is real and cannot be ignored. In recent years, we have also seen that the economic case for action — and against inaction — is just as clear, the business case for clean energy is growing, and the trend toward a cleaner power sector can be sustained regardless of near-term federal policies.

Despite the policy uncertainty that we face, I remain convinced that no country is better suited to confront the climate challenge and reap the economic benefits of a low-carbon future than the United States and that continued participation in the Paris process will yield great benefit for the American people, as well as the international community. Prudent U.S. policy over the next several decades would prioritize, among other actions, decarbonizing the U.S. energy system, storing carbon and reducing emissions within U.S. lands, and reducing non-CO2 emissions.

Of course, one of the great advantages of our system of government is that each president is able to chart his or her own policy course. And President-elect Donald Trump will have the opportunity to do so. The latest science and economics provide a helpful guide for what the future may bring, in many cases independent of near-term policy choices, when it comes to combatting climate change and transitioning to a clean energy economy.


January 14, 2017

David Hogg: #aas229, day 4

I arrived at the American Astronomical Society meeting this morning, just in time (well, a few minutes late, actually) for the Special Session on Software organized by Alice Allen (ASCL). There were talks about a range of issues in writing, publishing, and maintaining software in astrophysics. I spoke about software publications (slides here) and software citations. Not only were the ideas in the session diverse, but the presenters had a wide range of backgrounds (three of them aren't even astronomers)!

There were many interesting contributions to the session. I was most impressed with the data that people are starting to collect about how software is built, supported, discovered, and used. Along those lines, Iva Momcheva (STScI) showed some great data she took about how software projects are funded and built. This follows great work she did with Erik Tollerud (STScI) on how software is used by astronomers (paper here). In their new work, they find that most software is funded by grants that are not primarily (or in many cases not even secondarily) related to the software, and that most software is written by early-career scientists. These data have great implications for the next decade of astrophysics funding and planning. In the discussion afterwards, there were comments about how hard it is to fund the maintenance of software (something I feel keenly).

Similarly, Mike Hucka (Caltech) showed great results he has on how scientists discover software for use in their research projects (paper here). He finds (surprise!) that documentation is key, but there are many other contributing factors to make a piece of research software more likely to be used or re-used by others. His results have strong implications for developers finishing software projects. One surprising thing is that scientists are less platform-specific or language-specific in their needs than you might think.

I spent part of the afternoon hiding in various locations around the meeting, hacking on an unsupervised data-driven model of stellar spectra with Megan Bedell (Chicago).

January 13, 2017

Backreaction: What a burst! A fresh attempt to see space-time foam with gamma ray bursts.

It’s an old story: Quantum fluctuations of space-time might change the travel-time of light. Light of higher frequencies would be a little faster than that of lower frequencies. Or slower, depending on the sign of an unknown constant. Either way, the spectral colors of light would run apart, or ‘disperse’ as they say if they don’t want you to understand what they say. Such quantum gravitational

Doug Natelson: Brief items

What with the start of the semester and the thick of graduate admissions season, it's been a busy week, so rather than an extensive post, here are some brief items of interest:

  • We are hosting one of the APS Conferences for Undergraduate Women in Physics this weekend.  Welcome, attendees!  It's going to be a good time.
  • This week our colloquium speaker was Jim Kakalios of the University of Minnesota, who gave a very fun talk related to his book The Physics of Superheroes (an updated version of this), as well as a condensed matter seminar regarding his work on charge transport and thermoelectricity in amorphous and nanocrystalline semiconductors.  His efforts at popularizing physics, including condensed matter, are great.  His other books are The Amazing Story of Quantum Mechanics, and the forthcoming The Physics of Everyday Things.  That last one shows how an enormous amount of interesting physics is embedded and subsumed in the routine tasks of modern life - a point I've mentioned before.   
  • Another seminar speaker at Rice this week was John Biggins, who explained the chain fountain (original video here, explanatory video here, relevant paper here).
  • Speaking of videos, here is the talk I gave last April back at the Pittsburgh Quantum Institute's 2016 symposium, and here is the link to all the talks.
  • Speaking of quantum mechanics, here is an article in the NY Review of Books by Steven Weinberg on interpretations of quantum mechanics.  While I've seen it criticized online as offering nothing new, I found it to be clearly written and articulated, and that can't always be said for articles about interpretations of quantum mechanics.
  • Speaking of both quantum mechanics interpretations and popular writings about physics, here is John Cramer's review of David Mermin's recent collection of essays, Why Quark Rhymes with Pork:  And other Scientific Diversions (spoiler:  I agree with Cramer that Mermin is wrong on the pronunciation of "quark".)  The review is rather harsh regarding quantum interpretation, though perhaps that isn't surprising given that Cramer has his own view on this.

Chad Orzel: Physics Blogging Round-Up: December

This one’s late because I acquired a second class for the Winter term on very short notice. I was scheduled to teach our sophomore-level “Modern Physics” class, plus the lab, but a colleague who was scheduled to teach relativity for non-majors had a medical issue, and I’m the only other one on staff who’s ever taught it, so now I’m doing two courses instead of one. Whee!

Anyway, here are my December posts from Forbes:

Science Is Not THAT Special: Another in a long series of posts grumbling about the way we set science off from other pursuits and act as if the problems facing it are unique. In reality, a lot of what we talk about as issues of science education are challenges faced by pretty much every other profession as well, with less hand-wringing.

The Surprisingly Complicated Physics of Sliding On Ice: Revisiting that time a couple of years ago when I wrote a bunch about the physics of luge, this time talking about a much more basic question: Why is ice slippery?

“White Rabbit Project” Physics: G-Forces: I had a bunch of conversations with the producers of the new Netflix show “White Rabbit Project” a year or so ago, and some of what we talked about turned into an episode on “g-forces” in acceleration.

ALPHA Experiment Shines New Light On Antimatter: The ALPHA collaboration at CERN has done the first spectroscopy of antihydrogen. It’s pretty rudimentary by the standards of precision measurement folks, but still an important step.

What Should You Expect From Low-Energy Physics In 2017? It’s Hard To Say: I was reading posts about (high-energy) physics news to look for in 2017, and realized I couldn’t write an AMO physics equivalent. So I wrote about why I couldn’t make predictions about my home field.

So there, two weeks into January, is what I wrote about in December. I’ve got a couple of posts up already this month, but we’ll save them for the January recap, which I’ll try to get posted before March. No promises, though, because this extra class has thrown things into disarray…

January 12, 2017

Georg von Hippel: Book Review: "Lattice QCD — Practical Essentials"

There is a new book about Lattice QCD, Lattice Quantum Chromodynamics: Practical Essentials by Francesco Knechtli, Michael Günther and Mike Peardon. At 140 pages, this is a pretty slim volume, so it is obvious that it does not aim to displace time-honoured introductory textbooks like Montvay and Münster, or the newer books by Gattringer and Lang or DeGrand and DeTar. Instead, as suggested by the subtitle "Practical Essentials", and as said explicitly by the authors in their preface, this book aims to prepare beginning graduate students for their practical work in generating gauge configurations and measuring and analysing correlators.

In line with this aim, the authors spend relatively little time on the physical or field-theoretic background; while some more advanced topics such as the Nielsen-Ninomiya theorem and the Symanzik effective theory are touched upon, the treatment of foundational topics is generally quite brief, and some topics, such as lattice perturbation theory or non-perturbative renormalization, are altogether omitted. The focus of the book is on Monte Carlo simulations, for which both the basic ideas and practically relevant algorithms — heatbath and overrelaxation for pure gauge fields, and hybrid Monte Carlo for dynamical fermions — are described in some detail, including the RHMC algorithm and advanced techniques such as determinant factorizations, higher-order symplectic integrators, and multiple-timescale integration. The techniques from linear algebra required to deal with fermions are also covered in some detail, from the basic ideas of Krylov space methods through concrete descriptions of the GMRES and CG algorithms, along with such important preconditioners as even-odd and domain decomposition, to the ideas of algebraic multigrid methods. Stochastic estimation of all-to-all propagators with dilution, the one-end trick, and low-mode averaging are explained, as are techniques for building interpolating operators with specific quantum numbers, gauge link and quark field smearing, and the use of the variational method to extract hadronic mass spectra. Scale setting, the Wilson flow, and Lüscher's method for extracting scattering phase shifts are also discussed briefly, as are the basic statistical techniques for data analysis. Each chapter contains a list of references to the literature, covering both original research articles and reviews and textbooks for further study.
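As a point of reference (not an excerpt from the book), here is a minimal sketch of the conjugate gradient algorithm, the prototypical Krylov-space solver mentioned above. In a real lattice code, A would be the Hermitian positive-definite operator D†D applied matrix-free to a fermion field rather than a stored dense matrix.

```python
# Minimal conjugate-gradient sketch for solving A x = b with A Hermitian positive definite.
# In lattice QCD, A would be the normal operator D^dagger D applied matrix-free to a fermion field.
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - apply_A(x)                      # residual
    p = r.copy()                            # search direction
    rs_old = np.vdot(r, r).real
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / np.vdot(p, Ap).real
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r).real
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Toy usage with a random Hermitian positive-definite matrix standing in for D^dagger D.
rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50)) + 1j * rng.standard_normal((50, 50))
A = M.conj().T @ M + 50 * np.eye(50)
b = rng.standard_normal(50) + 1j * rng.standard_normal(50)
x = conjugate_gradient(lambda v: A @ v, b)
print(np.linalg.norm(A @ x - b))
```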

Overall, I feel that the authors succeed very well at their stated aim of giving a quick introduction to the methods most relevant to current research in lattice QCD in order to let graduate students hit the ground running and get to perform research as quickly as possible. In fact, I am slightly worried that they may turn out to be too successful, since a graduate student having studied only this book could well start performing research, while having only a very limited understanding of the underlying field-theoretical ideas and problems (a problem that already exists in our field in any case). While this in no way detracts from the authors' achievement, and while I feel I can recommend this book to beginners, I nevertheless have to add that it should be complemented by a more field-theoretically oriented traditional textbook for completeness.

___
Note that I have deliberately not linked to the Amazon page for this book. Please support your local bookstore — nowadays, you can usually order online on their websites, and many bookstores are more than happy to ship books by post.

Scott Aaronson: Quantum computing news (98% Trump-free)

(1) Apparently Microsoft has decided to make a major investment in building topological quantum computers, which will include hiring Charles Marcus and Matthias Troyer among others.  See here for their blog post, and here for the New York Times piece.  In the race to implement QC among the established corporate labs, Microsoft thus joins the Martinis group at Google, as well as the IBM group at T. J. Watson—though both Google and IBM are focused on superconducting qubits, rather than the more exotic nonabelian anyon approach that Microsoft has long favored and is now doubling down on.  I don’t really know more about this new initiative beyond what’s in the articles, but I know many of the people involved, they’re some of the most serious in the business, and Microsoft intensifying its commitment to QC can only be good for the field.  I wish the new effort every success, despite being personally agnostic between superconducting qubits, trapped ions, photonics, nonabelian anyons, and other QC hardware proposals—whichever one gets there first is fine with me!


(2) For me, though, perhaps the most exciting QC development of the last month was a new preprint by my longtime friend Dorit Aharonov and her colleague Yosi Atia, entitled Fast-Forwarding of Hamiltonians and Exponentially Precise Measurements.  In this work, Dorit and Yosi wield the clarifying sword of computational complexity at one of the most historically confusing issues in quantum mechanics: namely, the so-called “time-energy uncertainty principle” (TEUP).

The TEUP says that, just as position and momentum are conjugate in quantum mechanics, so too are energy and time—with greater precision in energy corresponding to lesser precision in time and vice versa.  The trouble is, it was never 100% clear what the TEUP even meant—after all, time isn’t even an observable in quantum mechanics, just an external parameter—and, to whatever extent the TEUP did have a definite meaning, it wasn’t clear that it was true.  Indeed, as Dorit and Yosi’s paper discusses in detail, in 1961 Dorit’s uncle Yakir Aharonov, together with David Bohm, gave a counterexample to a natural interpretation of the TEUP.  But, despite this and other counterexamples, the general feeling among physicists—who, after all, are physicists!—seems to have been that some corrected version of the TEUP should hold “in all reasonable circumstances.”

But, OK, what do we mean by a “reasonable circumstance”?  This is where Dorit and Yosi come in.   In the new work, they present a compelling case that the TEUP should really be formulated as a tradeoff between the precision of energy measurements and circuit complexity (that is, the minimum number of gates needed to implement the energy measurement)—and in that amended form, the TEUP holds for exactly those Hamiltonians H that can’t be “computationally fast-forwarded.”  In other words, it holds whenever applying the unitary transformation e^{-iHt} requires close to t computation steps, when there’s no magical shortcut that lets you simulate t steps of time evolution with only (say) log(t) steps.  And, just as the physicists handwavingly thought, that should indeed hold for “generic” Hamiltonians H (assuming BQP≠PSPACE), although it’s possible to use Shor’s algorithm, for finding the order of an element in a multiplicative group, to devise a counterexample to it.
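To make the “close to t computation steps” statement concrete, here is a toy numerical sketch (not taken from the Atia-Aharonov paper) of simulating e^{-iHt} by first-order Trotterization: for a fixed target accuracy, the number of elementary steps grows in proportion to t, which is the generic behavior; fast-forwardable Hamiltonians are precisely the exceptions where a shortcut exists.

```python
# Toy sketch: simulating e^{-iHt} for H = H1 + H2 by first-order Trotterization.
# For fixed accuracy the number of steps must grow with t; no generic shortcut is known.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H1, H2 = X, Z          # two non-commuting terms
H = H1 + H2

def trotter(t, n_steps):
    step = expm(-1j * H1 * t / n_steps) @ expm(-1j * H2 * t / n_steps)
    return np.linalg.matrix_power(step, n_steps)

t = 10.0
exact = expm(-1j * H * t)
for n_steps in [10, 100, 1000]:
    err = np.linalg.norm(trotter(t, n_steps) - exact, 2)
    print(n_steps, err)   # error shrinks roughly like 1/n_steps, so n_steps must scale with t
```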

Anyway, there’s lots of other stuff in the paper, including a connection to the stuff Lenny Susskind and I have been doing about the “generic” growth of circuit complexity, in the CFT dual of an expanding wormhole (where we also needed to assume BQP≠PSPACE and closely related complexity separations, for much the same reasons).  Congratulations to Dorit and Yosi for once again illustrating the long reach of computational complexity in physics, and for giving me a reason to be happy this month!


(3) As many of you will have seen, my former MIT colleagues, Lior Eldar and Peter Shor, recently posted an arXiv preprint claiming a bombshell result: namely, a polynomial-time quantum algorithm to solve a variant of the Closest Vector Problem in lattices.  Their claimed algorithm wouldn’t yet break lattice-based cryptography, but if the approximation factors could be improved, it would be well on the way to doing so.  This has been one of the most tempting targets for quantum algorithms research for more than twenty years—ever since Shor’s “original” algorithm laid waste to RSA, Diffie-Hellman, elliptic-curve cryptography, and more in a world with scalable quantum computers, leaving lattice-based cryptography as one of the few public-key crypto proposals still standing.

Unfortunately, Lior tells me that Oded Regev has discovered a flaw in the algorithm, which he and Peter don’t currently know how to fix.  So for now, they’re withdrawing the paper (because of the Thanksgiving holiday, the withdrawal won’t take effect on the arXiv until Monday).  It’s still a worthy attempt on a great problem—here’s hoping that they or someone else manage to, as Lior put it to me, “make the algorithm great again.”

Scott Aaronson: The teaser

Tomorrow, I’ll have something big to announce here.  So, just to whet your appetites, and to get myself back into the habit of blogging, I figured I’d offer you an appetizer course: some more miscellaneous non-Trump-related news.


(1) My former student Leonid Grinberg points me to an astonishing art form, which I somehow hadn’t known about: namely, music videos generated by executable files that fit in only 4K of memory.  Some of these videos have to be seen to be believed.  (See also this one.)  Much like, let’s say, a small Turing machine whose behavior is independent of set theory, these videos represent exercises in applied (or, OK, recreational) Kolmogorov complexity: how far out do you need to go in the space of all computer programs before you find beauty and humor and adaptability and surprise?

Admittedly, Leonid explains to me that the rules allow these programs to call DirectX and Visual Studio libraries to handle things like the 3D rendering (with the libraries not counted toward the 4K program size).  This makes the programs’ existence merely extremely impressive, rather than a sign of alien superintelligence.

In some sense, all the programming enthusiasts over the decades who’ve burned their free time and processor cycles on Conway’s Game of Life and the Mandelbrot set and so forth were captivated by the same eerie beauty showcased by the videos: that of data compression, of the vast unfolding of a simple deterministic rule.  But I also feel like the videos add a bit extra: the 3D rendering, the music, the panning across natural or manmade-looking dreamscapes.  What we have here is a wonderful resource for either an acid trip or an undergrad computability and complexity course.


(2) A week ago Igor Oliveira, together with my longtime friend Rahul Santhanam, released a striking paper entitled Pseudodeterministic Constructions in Subexponential Time.  To understand what this paper does, let’s start with Terry Tao’s 2009 polymath challenge: namely, to find a fast, deterministic method that provably generates large prime numbers.  Tao’s challenge still stands today: one of the most basic, simplest-to-state unsolved problems in algorithms and number theory.

To be clear, we already have a fast deterministic method to decide whether a given number is prime: that was the 2002 breakthrough by Agrawal, Kayal, and Saxena.  We also have a fast probabilistic method to generate large primes: namely, just keep picking n-digit numbers at random, test each one, and stop when you find one that’s prime!  And those methods can be made deterministic assuming far-reaching conjectures in number theory, such as Cramer’s Conjecture (though note that even the Riemann Hypothesis wouldn’t lead to a polynomial-time algorithm, but “merely” a faster exponential-time one).
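For concreteness, here is a minimal sketch of that fast probabilistic method: pick random n-digit numbers and apply the Miller-Rabin test until one passes. This is the baseline randomized algorithm, not the Oliveira-Santhanam construction.

```python
# Minimal sketch of the randomized prime-generation method described above:
# draw random n-digit numbers and test each with Miller-Rabin until one passes.
import random

def is_probable_prime(m, rounds=40):
    # Miller-Rabin primality test.
    if m < 2:
        return False
    for p in [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]:
        if m % p == 0:
            return m == p
    d, s = m - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, m - 1)
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, m)
            if x == m - 1:
                break
        else:
            return False
    return True

def random_prime(n_digits):
    # Expected number of trials is O(n_digits) by the Prime Number Theorem.
    while True:
        candidate = random.randrange(10 ** (n_digits - 1), 10 ** n_digits)
        if is_probable_prime(candidate):
            return candidate

print(random_prime(100))   # a 100-digit probable prime; larger sizes just take longer
```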

But, OK, what if you want a 5000-digit prime number, and you want it now: provably, deterministically, and fast?  That was Tao’s challenge.  The new paper by Oliveira and Santhanam doesn’t quite solve it, but it makes some exciting progress.  Specifically, it gives a deterministic algorithm to generate n-digit prime numbers, with merely the following four caveats:

  • The algorithm isn’t polynomial time, but subexponential (2^{n^{o(1)}}) time.
  • The algorithm isn’t deterministic, but pseudodeterministic (a concept introduced by Gat and Goldwasser).  That is, the algorithm uses randomness, but it almost always succeeds, and it outputs the same n-digit prime number in every case where it succeeds.
  • The algorithm might not work for all input lengths n, but merely for infinitely many of them.
  • Finally, the authors can’t quite say what the algorithm is—they merely prove that it exists!  If there’s a huge complexity collapse, such as ZPP=PSPACE, then the algorithm is one thing, while if not then the algorithm is something else.

Strikingly, Oliveira and Santhanam’s advance on the polymath problem is pure complexity theory: hitting sets and pseudorandom generators and win-win arguments and stuff like that.  Their paper uses absolutely nothing specific to the prime numbers, except the facts that (a) there are lots of them (the Prime Number Theorem), and (b) we can efficiently decide whether a given number is prime (the AKS algorithm).  It seems almost certain that one could do better by exploiting more about primes.


(3) I’m in Lyon, France right now, to give three quantum computing and complexity theory talks.  I arrived here today from London, where I gave another two lectures.  So far, the trip has been phenomenal, my hosts gracious, the audiences bristling with interesting questions.

But getting from London to Lyon also taught me an important life lesson that I wanted to share: never fly EasyJet.  Or at least, if you fly one of the European “discount” airlines, realize that you get what you pay for (I’m told that Ryanair is even worse).  These airlines have a fundamentally dishonest business model, based on selling impossibly cheap tickets, but then forcing passengers to check even tiny bags and charging exorbitant fees for it, counting on snagging enough travelers who just naïvely clicked “yes” to whatever would get them from point A to point B at a certain time, assuming that all airlines followed more-or-less similar rules.  Which might not be so bad—it’s only money—if the minuscule, overworked staff of these quasi-airlines didn’t also treat the passengers like beef cattle, barking orders and berating people for failing to obey rules that one could log hundreds of thousands of miles on normal airlines without ever once encountering.  Anyway, if the airlines won’t warn you, then Shtetl-Optimized will.

Scott Aaronson: “THE TALK”: My quantum computing cartoon with Zach Weinersmith

OK, here’s the big entrée that I promised you yesterday:

“THE TALK”: My joint cartoon about quantum computing with Zach Weinersmith of SMBC Comics.

Just to whet your appetite:

In case you’re wondering how this came about: after our mutual friend Sean Carroll introduced me and Zach for a different reason, the idea of a joint quantum computing comic just seemed too good to pass up.  The basic premise—“The Talk”—was all Zach.  I dutifully drafted some dialogue for him, which he then improved and illustrated.  I.e., he did almost all the work (despite having a newborn competing for his attention!).  Still, it was an honor for me to collaborate with one of the great visual artists of our time, and I hope you like the result.  Beyond that, I’ll let the work speak for itself.

Scott Aaronson: My 5-minute quantum computing talk at the White House

(OK, technically it was in the Eisenhower Executive Office Building, which is not exactly the White House itself, but is adjacent to the West Wing in the White House complex.  And President Obama wasn’t there—maybe, like Justin Trudeau, he already knows everything about quantum computing?  But lots of people from the Office of Science and Technology Policy were!  And some of us talked with Valerie Jarrett, Obama’s adviser, when she passed us on her way to the West Wing.

The occasion was a Quantum Information Science policy workshop that OSTP held, and which the White House explicitly gave us permission to discuss on social media.  Indeed, John Preskill already tweeted photos from the event.  Besides me and Preskill, others in attendance included Umesh Vazirani, Seth Lloyd, Yaoyun Shi, Rob Schoelkopf, Krysta Svore, Hartmut Neven, Stephen Jordan…

I don’t know whether this is the first time that the polynomial hierarchy, or the notion of variation distance, were ever invoked in a speech at the White House.  But in any case, I was proud to receive a box of Hershey Kisses bearing the presidential seal.  I thought of not eating them, but then I got hungry, and realized that I can simply refill the box later if desired.

For regular readers of Shtetl-Optimized, my talk won’t have all that much that’s new, but in any case it’s short.

Incidentally, during the workshop, a guy from OSTP told me that, when he and others at the White House were asked to prepare materials about quantum computing, posts on Shtetl-Optimized (such as Shor I’ll Do It) were a huge help.  Honored though I was to have “served my country,” I winced, thinking about all the puerile doofosities I might’ve self-censored had I had any idea who might read them.  I didn’t dare ask whether anyone at the White House also reads the comment sections!

Thanks so much to all the other participants and to the organizers for a great workshop.  –SA)


Quantum Supremacy

by Scott Aaronson (UT Austin)

October 18, 2016

Thank you; it’s great to be here.  There are lots of directions that excite me enormously right now in quantum computing theory, which is what I work on.  For example, there’s the use of quantum computing to get new insight into classical computation, into condensed matter physics, and recently, even into the black hole information problem.

But since I have five minutes, I wanted to talk here about one particular direction—one that, like nothing else that I know of, bridges theory and experiment in the service of what we hope will be a spectacular result in the near future.  This direction is what’s known as “Quantum Supremacy”—John [Preskill], did you help popularize that term?  [John nods yes]—although some people have been backing away from the term recently, because of the campaign of one of the possible future occupants of this here complex.

But what quantum supremacy means to me, is demonstrating a quantum speedup for some task as confidently as possible.  Notice that I didn’t say a useful task!  I like to say that for me, the #1 application of quantum computing—more than codebreaking, machine learning, or even quantum simulation—is just disproving the people who say quantum computing is impossible!  So, quantum supremacy targets that application.

What is important for quantum supremacy is that we solve a clearly defined problem, with some relationship between inputs and outputs that’s independent of whatever hardware we’re using to solve the problem.  That’s part of why it doesn’t cut it to point to some complicated, hard-to-simulate molecule and say “aha!  quantum supremacy!”

One discovery, which I and others stumbled on 7 or 8 years ago, is that quantum supremacy seems to become much easier to demonstrate if we switch from problems with a single valid output to sampling problems: that is, problems of sampling exactly or approximately from some specified probability distribution.

Doing this has two advantages.  First, we no longer need a full, fault-tolerant quantum computer—in fact, very rudimentary types of quantum hardware appear to suffice.  Second, we can design sampling problems for which we can arguably be more confident that they really are hard for a classical computer, than we are that (say) factoring is classically hard.  I like to say that a fast classical factoring algorithm might collapse the world’s electronic commerce, but as far as we know, it wouldn’t collapse the polynomial hierarchy!  But with sampling problems, at least with exact sampling, we can often show the latter implication, which is about the best evidence you can possibly get for such a problem being hard in the present state of mathematics.

One example of these sampling tasks that we think are classically hard is BosonSampling, which Alex Arkhipov and I proposed in 2011.  BosonSampling uses a bunch of identical photons that are sent through a network of beamsplitters, then measured to count the number of photons in each output mode.  Over the past few years, this proposal has been experimentally demonstrated by quantum optics groups around the world, with the current record being a 6-photon demonstration by the O’Brien group in Bristol, UK.  A second example is the IQP (“Instantaneous Quantum Polynomial-Time”) or Commuting Hamiltonians model of Bremner, Jozsa, and Shepherd.

A third example—no doubt the simplest—is just to sample from the output distribution of a random quantum circuit, let’s say on a 2D square lattice of qubits with nearest-neighbor interactions.  Notably, this last task is one that the Martinis group at Google is working toward achieving right now, with 40-50 qubits.  They say that they’ll achieve it in as little as one or two years, which, translated from experimental jargon, means maybe five years?  But not infinity years.
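(To make this task concrete for non-experts, here is a minimal numpy sketch of this kind of sampling on a toy line of five qubits, far below the 40-50 qubit regime. The specific circuit, alternating layers of random single-qubit unitaries with nearest-neighbor CZ gates, is only an illustrative assumption, not the actual experimental design.)

    import numpy as np

    rng = np.random.default_rng(0)

    def random_unitary(dim):
        # Haar-ish random unitary from the QR decomposition of a complex Gaussian matrix
        z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        q, r = np.linalg.qr(z)
        return q * (np.diag(r) / np.abs(np.diag(r)))

    def apply_single_qubit(state, u, qubit, n):
        psi = state.reshape([2] * n)
        psi = np.tensordot(u, psi, axes=([1], [qubit]))  # act on one tensor factor
        return np.moveaxis(psi, 0, qubit).reshape(-1)

    def apply_cz(state, a, b, n):
        psi = state.reshape([2] * n).copy()
        idx = [slice(None)] * n
        idx[a], idx[b] = 1, 1
        psi[tuple(idx)] *= -1                            # phase flip on |..1..1..>
        return psi.reshape(-1)

    n_qubits, depth = 5, 8                               # toy sizes only
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0                                       # start in |00000>
    for layer in range(depth):
        for q in range(n_qubits):
            state = apply_single_qubit(state, random_unitary(2), q, n_qubits)
        for a in range(layer % 2, n_qubits - 1, 2):      # alternating nearest-neighbor CZs
            state = apply_cz(state, a, a + 1, n_qubits)

    probs = np.abs(state) ** 2                           # the distribution to be sampled
    samples = rng.choice(2 ** n_qubits, size=10, p=probs)
    print([format(int(s), "0{}b".format(n_qubits)) for s in samples])

The point of the exercise is that already for modest numbers of qubits, writing down probs costs 2^n memory, while the experiment just produces samples.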

The challenges on the experimental side are clear: get enough qubits with long enough coherence times to achieve this.  But there are also some huge theoretical challenges remaining.

A first is, can we still solve classically hard sampling problems even in the presence of realistic experimental imperfections?  Arkhipov and I already thought about that problem—in particular, about sampling from a distribution that’s merely close in variation distance to the BosonSampling one—and got results that admittedly weren’t as satisfactory as the results for exact sampling.  But I’m delighted to say that, just within the last month or two, there have been some excellent new papers on the arXiv that tackle exactly this question, with both positive and negative results.

A second theoretical challenge is, how do we verify the results of a quantum supremacy experiment?  Note that, as far as we know today, verification could itself require classical exponential time.  But that’s not the showstopper that some people think, since we could target the “sweet spot” of 40-50 qubits, where classical verification is difficult (and in particular, clearly “costlier” than running the experiment itself), but also far from impossible with cluster computing resources.

If I have any policy advice, it’s this: recognize that a clear demonstration of quantum supremacy is at least as big a deal as (say) the discovery of the Higgs boson.  After this scientific milestone is achieved, I predict that the whole discussion of commercial applications of quantum computing will shift to a new plane, much like the Manhattan Project shifted to a new plane after Fermi built his pile under the Chicago stadium in 1942.  In other words: at this point, the most “applied” thing to do might be to set applications aside temporarily, and just achieve this quantum supremacy milestone—i.e., build the quantum computing Fermi pile—and thereby show the world that quantum computing speedups are a reality.  Thank you.

Scott AaronsonTime to vote-swap

I blogged about anti-Trump vote-swapping before (and did an interview at Huffington Post with Linchuan Zhang), but now, for my most in-depth look at the topic yet, check out my podcast interview with the incomparable Julia Galef, of “Rationally Speaking.”  Or if you’re bothered by my constant uhs and y’knows, I strongly recommend reading the transcript instead—I always sound smarter in print.

But don’t just read, act!  With only 9 days until the election, and with Hillary ahead but the race still surprisingly volatile, if you live in a swing state and support Gary Johnson or Jill Stein or Evan McMullin (but you nevertheless correctly regard Trump as the far greater evil than Hillary), or if you live in a relatively safe state and support Hillary (like I do), now is the time to find your vote-swap partner.  Remember that you and your partner can always back out later, by mutual consent, if the race changes (e.g., my vote-swap partner in Ohio has “released” me to vote for Hillary rather than Gary Johnson if, the night before Election Day, Texas looks like it might actually turn blue).

Just one thing: I recently got a crucial piece of intelligence about vote-swapping, which is to use the site TrumpTraders.org.  Previously, I’d been pointing people to another site called MakeMineCount.org, but my informants report that they never actually get assigned a match on that site, whereas they do right away on TrumpTraders.

Update (Nov. 6): Linchuan Zhang tells me that TrumpTraders.org currently has a deficit of several thousand Clinton supporters in safe states.  So if you’re such a person and you haven’t vote-swapped yet, please go there ASAP!

I’ve already voted for Gary Johnson in Texas, having “teleported” my Clinton vote to Ohio.  While Clinton’s position is stronger, it seems clear that the election will indeed be close, and Texas will not be in serious contention.

January 11, 2017

Terence TaoSome remarks on the lonely runner conjecture

I’ve just uploaded to the arXiv my paper “Some remarks on the lonely runner conjecture“, submitted to Contributions to discrete mathematics. I had blogged about the lonely runner conjecture in this previous blog post, and I returned to the problem recently to see if I could obtain anything further. The results obtained were more modest than I had hoped, but they did at least seem to indicate a potential strategy to make further progress on the problem, and also highlight some of the difficulties of the problem.

One can rephrase the lonely runner conjecture as the following covering problem. Given any integer “velocity” {v} and radius {0 < \delta < 1/2}, define the Bohr set {B(v,\delta)} to be the subset of the unit circle {{\bf R}/{\bf Z}} given by the formula

\displaystyle B(v,\delta) := \{ t \in {\bf R}/{\bf Z}: \|vt\| \leq \delta \},

where {\|x\|} denotes the distance of {x} to the nearest integer. Thus, for {v} positive, {B(v,\delta)} is simply the union of the {v} intervals {[\frac{a-\delta}{v}, \frac{a+\delta}{v}]} for {a=0,\dots,v-1}, projected onto the unit circle {{\bf R}/{\bf Z}}; in the language of the usual formulation of the lonely runner conjecture, {B(v,\delta)} represents those times in which a runner moving at speed {v} returns to within {\delta} of his or her starting position. For any non-zero integers {v_1,\dots,v_n}, let {\delta(v_1,\dots,v_n)} be the smallest radius {\delta} such that the {n} Bohr sets {B(v_1,\delta),\dots,B(v_n,\delta)} cover the unit circle:

\displaystyle {\bf R}/{\bf Z} = \bigcup_{i=1}^n B(v_i,\delta). \ \ \ \ \ (1)

 

Then define {\delta_n} to be the smallest value of {\delta(v_1,\dots,v_n)}, as {v_1,\dots,v_n} ranges over tuples of distinct non-zero integers. The Dirichlet approximation theorem quickly gives that

\displaystyle \delta(1,\dots,n) = \frac{1}{n+1}

and hence

\displaystyle \delta_n \leq \frac{1}{n+1}

for any {n \geq 1}. The lonely runner conjecture is equivalent to the assertion that this bound is in fact optimal:

Conjecture 1 (Lonely runner conjecture) For any {n \geq 1}, one has {\delta_n = \frac{1}{n+1}}.

This conjecture is currently known for {n \leq 6} (see this paper of Barajas and Serra), but remains open for higher {n}.
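As a quick numerical sanity check on these definitions (not part of the paper), note that {\delta(v_1,\dots,v_n)} is just the supremum over {t \in {\bf R}/{\bf Z}} of {\min_{1 \leq i \leq n} \|v_i t\|}, so it can be approximated by discretizing the circle. The short Python sketch below recovers the Dirichlet value {\frac{1}{n+1}} for the standard tuple {(1,\dots,n)} with {n \leq 6}, up to rounding; the grid size is a multiple of {2,\dots,7} so that the extremal times {a/(n+1)} land exactly on the grid.

    import numpy as np

    def dist_to_nearest_int(x):
        return np.abs(x - np.round(x))

    def delta(velocities, grid=420000):
        # delta(v_1,...,v_n) = sup over t in R/Z of min_i ||v_i t||,
        # approximated on a uniform grid of the unit circle
        t = np.arange(grid) / grid
        return np.max(np.min([dist_to_nearest_int(v * t) for v in velocities], axis=0))

    for n in range(1, 7):
        print(n, delta(range(1, n + 1)), 1 / (n + 1))   # the two values should agree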

It is natural to try to attack the problem by establishing lower bounds on the quantity {\delta_n}. We have the following “trivial” bound, that gets within a factor of two of the conjecture:

Proposition 2 (Trivial bound) For any {n \geq 1}, one has {\delta_n \geq \frac{1}{2n}}.

Proof: It is not difficult to see that for any non-zero velocity {v} and any {0 < \delta < 1/2}, the Bohr set {B(v,\delta)} has Lebesgue measure {m(B(v,\delta)) = 2\delta}. In particular, by the union bound

\displaystyle m(\bigcup_{i=1}^n B(v_i,\delta)) \leq \sum_{i=1}^n m(B(v_i,\delta)) \ \ \ \ \ (2)

 

we see that the covering (1) is only possible if {1 \leq 2 n \delta}, giving the claim. \Box

So, in some sense, all the difficulty is coming from the need to improve upon the trivial union bound (2) by a factor of two.

Despite the crudeness of the union bound (2), it has proven surprisingly hard to make substantial improvements on the trivial bound {\delta_n \geq \frac{1}{2n}}. In 1994, Chen obtained the slight improvement

\displaystyle \delta_n \geq \frac{1}{2n - 1 + \frac{1}{2n-3}}

which was improved a little by Chen and Cusick in 1999 to

\displaystyle \delta_n \geq \frac{1}{2n-3}

when {2n-3} was prime. In a recent paper of Perarnau and Serra, the bound

\displaystyle \delta_n \geq \frac{1}{2n-2+o(1)}

was obtained for arbitrary {n}. These bounds only improve upon the trivial bound by a multiplicative factor of {1+O(1/n)}. Heuristically, one reason for this is as follows. The union bound (2) would of course be sharp if the Bohr sets {B(v_i,\delta)} were all disjoint. Strictly speaking, such disjointness is not possible, because all the Bohr sets {B(v_i,\delta)} have to contain the origin as an interior point. However, it is possible to come up with a large number of Bohr sets {B(v_i,\delta)} which are almost disjoint. For instance, suppose that we had velocities {v_1,\dots,v_s} that were all prime numbers between {n/4} and {n/2}, and that {\delta} was equal to {\delta_n} (and in particular was between {1/2n} and {1/(n+1)}). Then each set {B(v_i,\delta)} can be split into a “kernel” interval {[-\frac{\delta}{v_i}, \frac{\delta}{v_i}]}, together with the “petal” intervals {\bigcup_{a=1}^{v_i-1} [\frac{a-\delta}{v_i}, \frac{a+\delta}{v_i}]}. Roughly speaking, as the prime {v_i} varies, the kernel interval stays more or less fixed, but the petal intervals range over disjoint sets, and from this it is not difficult to show that

\displaystyle m(\bigcup_{i=1}^s B(v_i,\delta)) = (1-O(\frac{1}{n})) \sum_{i=1}^s m(B(v_i,\delta)),

so that the union bound is within a multiplicative factor of {1+O(\frac{1}{n})} of the truth in this case.

This does not imply that {\delta_n} is within a multiplicative factor of {1+O(1/n)} of {\frac{1}{2n}}, though, because there are not enough primes between {n/4} and {n/2} to assign to {n} distinct velocities; indeed, by the prime number theorem, there are only about {\frac{n}{4\log n}} such velocities that could be assigned a prime. So, while the union bound could be close to tight for up to {\asymp n/\log n} Bohr sets, the above counterexamples don’t exclude improvements to the union bound for larger collections of Bohr sets. Following this train of thought, I was able to obtain a logarithmic improvement to previous lower bounds:

Theorem 3 For sufficiently large {n}, one has {\delta_n \geq \frac{1}{2n} + \frac{c \log n}{n^2 (\log\log n)^2}} for some absolute constant {c>0}.

The factors of {\log\log n} in the denominator are for technical reasons and might perhaps be removable by a more careful argument. However it seems difficult to adapt the methods to improve the {\log n} in the numerator, basically because of the obstruction provided by the near-counterexample discussed above.

Roughly speaking, the idea of the proof of this theorem is as follows. If we have the covering (1) for {\delta} very close to {1/2n}, then the multiplicity function {\sum_{i=1}^n 1_{B(v_i,\delta)}} will then be mostly equal to {1}, but occasionally be larger than {1}. On the other hand, one can compute that the {L^2} norm of this multiplicity function is significantly larger than {1} (in fact it is at least {(3/2-o(1))^{1/2}}). Because of this, the {L^3} norm must be very large, which means that the triple intersections {B(v_i,\delta) \cap B(v_j,\delta) \cap B(v_k,\delta)} must be quite large for many triples {(i,j,k)}. Using some basic Fourier analysis and additive combinatorics, one can deduce from this that the velocities {v_1,\dots,v_n} must have a large structured component, in the sense that there exists an arithmetic progression of length {\asymp n} that contains {\asymp n} of these velocities. For simplicity let us take the arithmetic progression to be {\{1,\dots,n\}}, thus {\asymp n} of the velocities {v_1,\dots,v_n} lie in {\{1,\dots,n\}}. In particular, from the prime number theorem, most of these velocities will not be prime, and will in fact likely have a “medium-sized” prime factor (in the precise form of the argument, “medium-sized” is defined to be “between {\log^{10} n} and {n^{1/10}}“). Using these medium-sized prime factors, one can show that many of the {B(v_i,\delta)} will have quite a large overlap with many of the other {B(v_j,\delta)}, and this can be used after some elementary arguments to obtain a more noticeable improvement on the union bound (2) than was obtained previously.

A modification of the above argument also allows for the improved estimate

\displaystyle \delta(v_1,\dots,v_n) \geq \frac{1+c-o(1)}{2n} \ \ \ \ \ (3)

 

if one knows that all of the velocities {v_1,\dots,v_n} are of size {O(n)}.

In my previous blog post, I showed that in order to prove the lonely runner conjecture, it suffices to do so under the additional assumption that all of the velocities {v_1,\dots,v_n} are of size {O(n^{O(n^2)})}; I reproduce this argument (slightly cleaned up for publication) in the current preprint. There is unfortunately a huge gap between {O(n)} and {O(n^{O(n^2)})}, so the above bound (3) does not immediately give any new bounds for {\delta_n}. However, one could perhaps try to start attacking the lonely runner conjecture by increasing the range {O(n)} for which one has good results, and by decreasing the range {O(n^{O(n^2)})} that one can reduce to. For instance, in the current preprint I give an elementary argument (using a certain amount of case-checking) that shows that the lonely runner bound

\displaystyle \delta(v_1,\dots,v_n) \geq \frac{1}{n+1} \ \ \ \ \ (4)

 

holds if all the velocities {v_1,\dots,v_n} are assumed to lie between {1} and {1.2 n}. This upper threshold of {1.2 n} is only a tiny improvement over the trivial threshold of {n}, but it seems to be an interesting sub-problem of the lonely runner conjecture to increase this threshold further. One key target would be to get up to {2n}, as there are actually a number of {n}-tuples {(v_1,\dots,v_n)} in this range for which (4) holds with equality. The Dirichlet approximation theorem of course gives the tuple {(1,2,\dots,n)}, but there is also the double {(2,4,\dots,2n)} of this tuple, and furthermore there is an additional construction of Goddyn and Wong that gives some further examples such as {(1,2,3,4,5,7,12)}, or more generally one can start with the standard tuple {(1,\dots,n)} and accelerate one of the velocities {v} to {2v}; this turns out to work as long as {v} shares a common factor with every integer between {n-v+1} and {2n-2v+1}. There are a few more examples of this type in the paper of Goddyn and Wong, but all of them can be placed in an arithmetic progression of length {O(n \log n)} at most, so if one were very optimistic, one could perhaps envision a strategy in which the upper bound of {O(n^{O(n^2)})} mentioned earlier was reduced all the way to something like {O( n \log n )}, and then a separate argument deployed to treat this remaining case, perhaps isolating the constructions of Goddyn and Wong (and possible variants thereof) as the only extreme cases.



n-Category Café Category Theory in Barcelona

I’m excited to be in Barcelona to help Joachim Kock teach an introductory course on category theory. (That’s a link to bgsmath.cat — categorical activities in Catalonia have the added charm of a .cat web address.) We have a wide audience of PhD and masters students, specializing in subjects from topology to operator algebras to number theory, and representing three Barcelona universities.

We’re taking it at a brisk pace. First of all we’re working through my textbook, at a rate of one chapter a day, for six days spread over two weeks. Then we’re going to spend a week on more advanced topics. Today Joachim did Chapter 1 (categories, functors and natural transformations), and tomorrow I’ll do Chapter 2 (adjunctions).

I’d like to use this post for two things: to invite questions and participation from the audience, and to collect slogans. Let me explain…

Joachim pointed out today that category theory is full of slogans. Here’s the first one:

It’s more important how things interact than what they “are”.

As he observed, the question of what things “are” is slippery. Let me quote a bit from my book:

In his excellent book Mathematics: A Very Short Introduction, Timothy Gowers considers the question: “What is the black king in chess?”. He swiftly points out that this question is rather peculiar. It is not important that the black king is a small piece of wood, painted a certain colour and carved into a certain shape. We could equally well use a scrap of paper with “BK” written on it. What matters is what the black king does: it can move in certain ways but not others, according to the rules of chess.

In a categorical context, what an object “does” means how it interacts with the world around it — the category in which it lives.

Tomorrow I’ll proclaim some more slogans — I have some in mind. But I’d like to hear from you too. What are the most important slogans in category theory? And what do they mean to you?

I’d also like to try an experiment. The classes move rather quickly, so there’s not a huge amount of time in them for discussion or questions. But I’d like to invite students in the class to ask questions here. You can post anonymously — no one will know it’s you — and with any luck, you’ll get interesting answers from multiple points of view. So please, don’t be inhibited: ask whatever’s on your mind. You can even include LaTeX, in more or less the usual way: just put stuff between dollar signs. No tinguis por!

January 09, 2017

Tommaso DorigoGetting Married

I am happy to report, with this rather unconventional blog posting, that I am getting married on January 12. My companion is Kalliopi Petrou, a lyrical singer. There will be no huge party involved in the event, as Kalliopi and I have lived together for some time already and the ceremony will be minimalistic. None the less, we do give importance to this common decision, so much so that I thought it would be a good thing to broadcast in public - here.


January 08, 2017

Doug NatelsonPhysics is not just high energy and astro/cosmology.

A belated happy new year to my readers.  Back in 2005, nearly every popularizer of physics on the web, television, and bookshelves was either a high energy physicist (mostly theorists) or someone involved in astrophysics/cosmology.  Often these people were presented, either deliberately or through brevity, as representing the whole discipline of physics.  Things have improved somewhat, but the overall situation in the media today is not that different, as exemplified by the headline of this article, and noticed by others (see the fourth paragraph here, at the excellent blog by Ross McKenzie).

For example, consider Edge.org, which has an annual question that they put to "the most complex and sophisticated minds".  This year the question was, what scientific term or concept should be more widely known?  It's a very interesting piece, and I encourage you to read it.  They got responses from 206 contributors (!).  By my estimate, about 31 of those would likely say that they are active practicing physicists, though definitions get tricky for people working on "complexity" and computation.  Again, by my rough count, from that list I see 12-14 high energy theorists (depending on whether you count Yuri Milner, who is really a financier, or Gino Segre, who is an excellent author but no longer an active researcher) including Sabine Hossenfelder, one high energy experimentalist, 10 people working on astrophysics/cosmology, four working on some flavor of quantum mechanics/quantum information (including the blogging Scott Aaronson), one on biophysics/complexity, and at most two on condensed matter physics.  Seems to me like representation here is a bit skewed.

Hopefully we will keep making progress on conveying that high energy/cosmology is not representative of the entire discipline of physics....



BackreactionStephen Hawking turns 75. Congratulations! Here’s what to celebrate.

If people know anything about physics, it’s the guy in a wheelchair who speaks with a computer. Google “most famous scientist alive” and the answer is “Stephen Hawking.” But if you ask a physicist, what exactly is he famous for? Hawking became “officially famous” with his 1988 book “A Brief History of Time.” Among physicists, however, he’s more renowned for the singularity theorems. In his

January 07, 2017

Tommaso DorigoThe Three Cubes Problem

Two days ago, before returning from Israel, my fiancee Kalliopi and I had a very nice dinner in a kosher restaurant near Rehovot in the company of Eilam Gross, Zohar Komargodski, and Zohar's wife Olga. 
The name of Eilam should be familiar to regulars of this blog as he wrote a couple of guest posts here, on similar occasions (in the first case it was shortly before the Higgs discovery was announced, when the signal was intriguing but not yet decisive; and in the second case it was about the 750 GeV resonance, which unfortunately did not concretize into a discovery). As for Zohar, he is a brilliant theorist working in applications of quantum field theory. He is young but has already won several awards, among them the prestigious New Horizons in Physics prize.


January 05, 2017

Jordan EllenbergBooklist 2016 — the year of translation

This year my reading project was for the majority of the books I read to be translated from a language other than English.  Here’s the list:

  • 31 Dec 2016:  Troubling Love, by Elena Ferrante (Ann Goldstein, trans.)
  • 27 Dec 2016:  The Civil Servant’s Notebook, by Wang Xiaofang (Eric Abrahamsen, trans.)
  • 16 Dec 2016:  Nirmala, by Premchand (David Rubin, trans.)
  • 16 Dec 2016:  A Long Walk to Water, by Linda Sue Park
  • 1 Dec 2016:  Nabokov’s Favorite Word is Mauve, by Ben Blatt
  • 24 Nov 2016: HHhH, by Laurent Binet (Sam Taylor, trans.)
  • 21 Nov 2016:  Secondhand Time, by Svetlana Alexievich (Bela Shayevich, trans.)
  • 20 Nov 2016:  Twenty-Four Hours in the Life of a Woman, by Stefan Zweig (Anthea Bell, trans.)
  • 6 Nov 2016:  Houseboy, by Ferdinand Oyono (John Reed, trans.)
  • 3 Nov 2016:  The Good Life Elsewhere, by Vladimir Lorchenkov (Ross Ufberg, trans.)
  • 12 Oct 2016:  Tales of the Hasidim:  The Early Masters, by Martin Buber (Olga Marx, trans.)
  • 1 Oct 2016:  Hit Makers, by Derek Thompson
  • 25 Sep 2016:  The Fireman, by Joe Hill
  • 19 Sep 2016:  Ghosts, by Raina Telgemeier
  • 3 Sep 2016:  The Queue, by Basma Abdel Aziz (Elizabeth Jaquette, trans.)
  • 11 Aug 2016:  City of Mirrors, by Justin Cronin
  • 26 Jul 2016:  Why I Killed My Best Friend, by Amanda Michalopoulou (Karen Emmerich, trans.)
  • 19 Jul 2016:  1Q84, by Haruki Murakami (Philip Gabriel and Jay Rubin, trans.)
  • 10 Jul 2016:  The Story of My Teeth, by Valeria Luiselli (Christina MacSweeney, trans.)
  • 1 Jul 2016:  So You Don’t Get Lost In The Neighborhood, by Patrick Modiano (Euan Cameron, trans.)
  • 13 May 2016:  Weapons of Math Destruction, by Cathy O’Neil
  • 2 May 2016:  Sh*tty Mom for All Seasons, by Erin Clune
  • 20 Apr 2016:  There’s Nothing I Can Do When I Think of You Late at Night, by Cao Naiqian (John Balcom, trans.)
  • 1 Apr 2016:  The Story of the Lost Child, by Elena Ferrante (Ann Goldstein, trans.)
  • 25 Feb 2016:  Those Who Leave and Those Who Stay, by Elena Ferrante (Ann Goldstein, trans.)
  • 10 Feb 2016:  Voices from Chernobyl, by Svetlana Alexievich (Keith Gessen, trans.)
  • 1 Feb 2016:  The Story of a New Name, by Elena Ferrante (Ann Goldstein, trans.)
  • 9 Jan 2016:  Amy and Laura, by Marilyn Sachs
  • 7 Jan 2016:  My Brilliant Friend, by Elena Ferrante (Ann Goldstein, trans.)

Note that I’m behind on these posts:  I covered the 2013 booklist about a year ago,  but still have to do 2015 (the year of reading mostly women) and 2014.  I’ll get to it.

20 translated books, 9 books in English.  One thing to note is that I read few books this year; I think reading in translation is just a little slower for me.

The languages:

  • 5 Italian (all Ferrante)
  • 3 French (two from France, one from Cameroon)
  • 3 Russian (but no Russian authors!  Lorchenkov is Moldovan, Alexievich is Belarussian.)
  • 2 Chinese
  • 2 German
  • 1 Japanese, 1 Arabic, 1 Greek, 1 Hindi, 1 Spanish.

Overall thoughts:  My plan, I guess, was to expand my horizons.  Did I?  I’m not sure I found these books to be as different from my usual reading as I expected.  Maybe because when American and British writers translate foreign books they somehow press them into the mold of the American and British novel I’m so at ease with?  Or because the novel is fundamentally a cosmopolitan form that works roughly the same way in different national traditions?

The one exception was There’s Nothing I Can Do When I Think of You Late at Night, a kind of Chinese Winesburg, Ohio:  very short, linked stories all set in a remote and desperately impoverished village.  It’s sort of incantatory, phrases repeated several times, in a way that really feels alien to the prose fiction tradition I know.  Cao Naiqian wasn’t trained as a writer; apparently he was a detective who started writing as a bet.  Here’s a review with some excerpts.

Best of the year:  No way to choose between Ferrante and Alexievich.  They are too different.  Also the same, of course, in that they always come back to women and the men from whom they expect little and get even less.  And the men from whom they expect something bad and get something even worse.

The books are oral history, interviews collected and transcribed into something like an epic.  Here’s a young woman in Belarus, released from prison after being arrested in a demonstration, telling her story in Secondhand Time:

Do I still like the village?  People here live the same way year in and year out.  They dig for potatoes in their vegetable patches, crawl around on their knees.  Make moonshine.  You won’t find a single sober man after dark, they all drink every single day.  They vote for Lukashenko and mourn the Soviet Union.  The undefeatable Soviet Army.  On the bus, one of our neighbors sat down next to me.  He was drunk.  He talked about politics:  “I would beat every moron democrat’s face in myself if I could.  They let you off easy.  I swear to God!  All of them ought to be shot.  America is behind all this, they’re paying for it … Hillary Clinton … but we’re a strong people.  We lived through perestroika, and we’ll make it through another revolution.  One wise man told me that the kikes are the ones behind it.”  The whole bus supported him.  “Things wouldn’t be any worse than they are now.  All you see on TV is bombings and shootings everywhere.”

The same woman, on her time in jail:

I learned that happiness can come from something as small as a bit of sugar or a piece of soap.  In a cell intended for five people — thirty-two square meters — there were seventeen of us.  You had to learn how to fit your entire life into two square meters.  It was especially hard at night, there was no air to breathe, it was stifling.  We wouldn’t get to sleep for a long time.  We stayed up talking.  The first few days, we discussed politics, but after that, we only ever talked about love.

Other Notes:  1Q84 was my first Murakami.  A fascinating example of a book that in many ways I view as  objectively poorly written but which I found captivating, even though it was 1000 pages long.  So maybe this, like Cao, is another book doing something with prose which I’m not used to and which I can’t completely understand.  Twenty-Four Hours in the Life of a Woman was compelling melodrama.  Tales of the Hasidim helped me remember that my idea of what “Jewish culture” means (intellectual, verbal, rule-governed, repressed)  is only one small part of our tradition, and not necessarily the biggest one.  The Lorchenkov was blackly funny.  The Aziz and the Michalopoulou were dull, though this could have been the translator’s fault.  The Civil Servant’s Notebook is a multivocal roman a clef (really multivocal; some of the chapters are narrated by desk furniture) about municipal corruption in China; it was apparently a huge bestseller there and has touched off an entire popular genre of “officialdom literature.”  Maybe we should have that here!

Worst of the year:  Easy, City of Mirrors.  I just dumped a huge ball of words on this terrible book so I went ahead and broke it out as a separate post so as not to dominate my nice year of translations.

 


Jordan EllenbergCity of Mirrors

Remember how much I liked the first book in this series?  It wasn’t perfect, but I admired the idea of depicting the destruction of a world that’s already kind of ruined though the people in the world don’t fully realize it.  (See also:  Station Eleven.) I was going to write a long post about how lousy this book was but didn’t get around to it and now I’ve mercifully forgotten most of the worst parts.  Still, I did save a lot of highlights of terrible sentences to my Kindle so here are some.

“such was the bittersweet beauty of life”

“Here, tacked to the neutral plaster walls, are the pennants of sports teams and the conundrumous M.C. Escher etching of hands drawing each other and, opposite the sagging single bed, the era-appropriate poster of the erect-nippled Sports Illustrated swimsuit model, beneath whose lubricious limbs and come-hither gaze and barely concealed pudenda the boy has furiously masturbated night after adolescent night.”

“I’d known that Lucessi had a younger sister; he had failed to mention that she was a bona fide Mediterranean goddess, quite possibly the most beautiful girl I’d ever laid eyes on — regally tall, with lustrous black hair, a complexion so creamy I wanted to drink it, and a habit of traipsing into a room wearing nothing more than a slip…. striding through the house in tall riding boots and clanking spurs and tight breeches, a costume no less powerful than the slip in its ability to send the blood dumping to my loins.”

“Though I knew I had done well, I was still astonished to see my first-semester report with its barricade of A’s”

“Between these carnal escapades — Carmen and I would often race back to her room between classes for an hour of furious copulation — and my voluminous classwork and, of course, my hours at the library — time well spent replenishing myself for our next encounter — I saw less and less of Lucessi.”

“On a Saturday afternoon, escorted by my father, I entered this sacred masculine space.  The details were intoxicating.  The odors of tonic, leather, talc.  The combs lounging in their disinfecting aquamarine bath.  The hiss and crackle of AM radio, broadcasting manly contests upon green fields.  My father beside me.  I waited on a chair of cracked red vinyl. Men were being barbered, lathered, whisked.”

“Caleb had peeked at her journals a few times over the years, unable to resist this small crime; like her letters, her entries were wonderfully written.  While they sometimes expressed doubts or concern over various matters, generally they communicated an optimistic view of life.”

The combination of pretentiousness (“manly contests upon green fields”), cliche (“hiss and crackle”), thesaurus-wrangling (“barricade”,”traipsing”), and general vagueness (“various matters”) is really something special.  It is not just bad but BAD in the sense of Paul Fussell.

Oh yeah and also he’s obsessed with the word “possessed” where he means to say “had” or just express the idea in a defter way.

“Something about this place felt new and undiscovered; it possessed a feeling of sanctuary.”

“His flesh, a sickly yellow, possessed a damp, translucent appearance, like the inner layers of an onion.”

“His thoughts possessed a lazy, unmoored quality.”

“His limbs possessed a thin-boned delicacy”

So much more is wrong!  The way characters are often referred to as “the man” or “the woman,” the general concern with manliness throughout, but manliness of a very bent kind:  a weird you-know-how-it-is sympathy towards characters when they get so frustrated that they just have to hit / strangle / kill / sexually assault a woman?  Which culminates in the big action set piece at the end — so of course the big giant ship they have to escape on has been tiresomely personified as a woman, Michael’s actual one true love for 400 pages — then the big moment comes and the ship can’t consummate, all seems lost, until Michael solves the problem by calling the ship a bitch and hitting it with a wrench, at which point it immediately settles down and behaves.  Then the end is an epilogue set 1000 years in the future where a tenured professor (yes, there’s still tenure in 1000 years) gets lucky with a young journalist who’s captivated by his hidden depth and middle-aged loneliness.

 

 


John BaezInformation Processing in Chemical Networks

There’s a workshop this summer:

• Dynamics, Thermodynamics and Information Processing in Chemical Networks, 13-16 June 2017, Complex Systems and Statistical Mechanics Group, University of Luxembourg. Organized by Massimiliano Esposito and Matteo Polettini.

They write, “The idea of the workshop is to bring in contact a small number of high-profile research groups working at the frontier between physics and biochemistry, with particular emphasis on the role of Chemical Networks.”

Some invited speakers include Vassily Hatzimanikatis, John Baez, Christoph Flamm, Hong Qian, Joshua D. Rabinowitz, Luca Cardelli, Erik Winfree, David Soloveichik, Stefan Schuster, David Fell and Arren Bar-Even. There will also be a session of shorter seminars by researchers from the local institutions such as the Luxembourg Centre for Systems Biomedicine. I believe attendance is by invitation only, so I’ll endeavor to make some of the ideas presented available here at this blog.

Some of the people involved

I’m looking forward to this, in part because there will be a mix of speakers I’ve met, speakers I know but haven’t met, and speakers I don’t know yet. I feel like reminiscing a bit, and I hope you’ll forgive me these reminiscences, since if you try the links you’ll get an introduction to the interface between computation and chemical reaction networks.

In part 25 of the network theory series here, I imagined an arbitrary chemical reaction network and said:

We could try to use these reactions to build a ‘chemical computer’. But how powerful can such a computer be? I don’t know the answer.

Luca Cardelli answered my question in part 26. This was just my first introduction to the wonderful world of chemical computing. Erik Winfree has a DNA and Natural Algorithms Group at Caltech, practically next door to Riverside, and the people there do a lot of great work on this subject. David Soloveichik, now at U. T. Austin, is an alumnus of this group.

In 2014 I met all three of these folks, and many other cool people working on these themes, at a workshop I tried to summarize here:

Programming with chemical reaction networks, Azimuth, 23 March 2014.

The computational power of chemical reaction networks, 10 June 2014.

Chemical reaction network talks, 26 June 2014.

I met Matteo Polettini about a year later, at a really big workshop on chemical reaction networks run by Elisenda Feliu and Carsten Wiuf:

Trends in reaction network theory (part 1), Azimuth, 27 January 2015.

Trends in reaction network theory (part 2), Azimuth, 1 July 2015.

Polettini has his own blog, very much worth visiting. For example, you can see his view of the same workshop here:

• Matteo Polettini, Mathematical trends in reaction network theory: part 1 and part 2, Out of Equilibrium, 1 July 2015.

Finally, I met Massimiliano Esposito and Christoph Flamm recently at the Santa Fe Institute, at a workshop summarized here:

Information processing and biology, Azimuth, 7 November 2016.

So, I’ve gradually become educated in this area, and I hope that by June I’ll be ready to say something interesting about the semantics of chemical reaction networks. Blake Pollard and I are writing a paper about this now.


Tommaso DorigoAnomaly! At 35% Discount For Ten More Days

I thought it would be good to let you readers of this column know that in case you wish to order the book "Anomaly! Collider Physics and the Quest for New Phenomena at Fermilab" (or any other title published by World Scientific, for that matter) you have 10 more days to benefit from a 35% discount off the cover price. Just visit the World Scientific site of the book and use the discount code WS16XMAS35.


Dave BaconSeattle for QIPers

QIP 2017 is coming to Seattle, hosted by the QuArC group at Microsoft, January 16-20 (with tutorials on the 14th and 15th). If you have some spare moments, perhaps because you arrive early or plan to take an afternoon off, here are some ideas for things to do around the wonderful city I call home.

Be a Tourist!

  • Take a trip up to the Seattle Center (approximately 1 mile walk from Hotel).  There you can take a ride to the top of the Space Needle ($22), which has some great views when it is sunny (ha!).  Music or Star Trek fan?  Check out Paul Allen’s collection of toys and memorabilia at the Museum of Pop Culture ($30), which has two very geeky exhibits right now, Star Trek and Indie Game Revolution.  Or if you are secure in your ability to not knock over stuff worth more than its weight in gold, check out the Chihuly Garden and Glass ($22, combine with a trip to Space Needle for $36).  Kids and family in tow?  Can’t go wrong with the Pacific Science Center ($27.75 adults, $11.75 kids) and the Seattle Children’s Museum ($10.50).
  • Visit Pike Place Market (about 0.5 mile walk from Hotel).  See them toss fish!  Visit the original Starbucks (sssshhh it was actually the second).  Like your politics off the chart? Check out Left Bank Books which has a seriously eclectic collection of books.  While you’re at it, if you’re playing tourist, you might as well walk on down to the waterfront where you can take a ride on the Seattle Great Wheel ($13) or check out the Aquarium ($50 ouch) (we had a party there a few years back, yes we ate sushi in front of the octopus.)
  • Architecture buff on the cheap?  Check out the Seattle Central Library (a little over a half mile from Hotel).  Sculpture buff on the cheap?  Walk around the Olympic Sculpture Park (little over a mile from the Hotel).  These are in completely different directions from the Hotel.
  • Museums?  Seattle Art Museum has a nice collection ($25) but my favorite these days is the Museum of History and Industry (Little over 1 mile walk, $20).  The MoHaI is located in south Lake Union, a location that has been transformed dramatically in the last few years since Amazon relocated to the area.  Count the number of cranes!
  • So it turns out the Seattle you see today was built over the top of the Seattle that used to be, and, while I’ve never done it, everyone I know who has done it loves the Seattle Underground Tour.  Note that if you combine this tour with reading about earthquakes in the PNW you might give yourself some anxiety issues.  Seattle is in the middle of boring a long tunnel under its downtown to replace the gigantic monstrosity of the viaduct; sadly, I don’t think there are any tours of the tunnel boring machine, Big Bertha.

Be a Geek!

  • Ada’s Technical Books is in the Capital Hill Neighborhood (bus or Lyft).  It’s not as crazy as some university town bookstore, but has a good collection of non-standard science and tech books.
  • Elliot Bay Bookstore again in Capital Hill is no Powell’s but it’s still rather good.
  • Fantagraphics bookstore and gallery.  You’ll know if you want to go to this if you recognize the name.

See a Show!

Get Out and About!

  • We’ve a ton of snow right now.  Snoqualmie is closest, great for beginners or if you’re just craving a quick ski or board.  For the more serious, Baker, Crystal, and Stevens Pass are all recommended.  I like Crystal a bit more, on clear days the view of Mt. Rainier is spectacular.
  • Take a ferry over to Bainbridge Island.  This is one of my top recommendations in the summer, but even in the winter it’s a nice trip.  (Other summer recommendation is to rent a Kayak and kayak around Lake Union, but it’s too cold to do that this time of year.)
  • If you’re up for a nice stroll, head over to Discovery Park or take a walk on the Alki beach in West Seattle (both require a ride to get there from Hotel, though you could walk down and take the water taxi on weekdays.)  Closer by to the Hotel, head over to Myrtle Edwards Park.

Neighborhoods

  • Seattle is a city of neighborhoods, each of which believes that it has its own style!  Each of these except Belltown or Downtown is a bus, cab, or rideshare away.  Really there is too much to cover here, but here are a few short notes:
    • Belltown: This is the neighborhood just north of downtown where the Hotel is located.  Used to be sketchy but now has lots of luxury condos.  Shorty’s is a dive with pinball and hot dogs.  People seem to love Tilikum Place Cafe though I have not been there.  If you want a traditional expensive steakhouse, El Gaucho is great, though I think the Metropolitan Grill in downtown is better (both pricey!)  Since this is a quantum conference, I would be remiss not to point out that Belltown is the site of Some Random Bar, which I believe has good crab nachos.  If you crave a sweet donut, Top Pot Donuts is literally just up the street from the hotel.
    • Fremont: Is still an eclectic neighborhood, though not quite as far out as it used to be.  Its annual solstice parade is the only day it is legal to ride your bike nude in Seattle.   Tons of places to eat and drink here; I recommend Brouwers (great beer selection, frites), Revel (Korean fusion, no reservations), and Paseo (Cuban sandwiches OMG delicious) but there are a ton more in the neighborhood.   Theo’s chocolate does factory tours and also supplies a great smell to the neighborhood (along with another smell from the nearby dispensaries!)  Also if you’re up this way you can see a huge troll under a bridge, a rocket ship, and a statue of Lenin (who sometimes gets dressed in drag).
    • Ballard: Originally a Scandinavian fishing community, these days it’s hip as Seattle hip gets.  Sunday year round farmer’s market.  When many people think of the Pacific Northwest they think of fish, but really I think where Seattle really shines is in shellfish.  The Walrus and the Carpenter is a great place to affirm this claim.
    • Capital Hill: East of downtown, Seattle’s most vibrant district.  Fancy restaurants: Altura, Poppy.
    • University District: Lots of cheap eats for UW students.  In the summer I recommend renting a kayak from Agua Verde, a Mexican restaurant/kayak rental joint.
    • South Lake Union: Amazon land, totally transformed over the last few years. I’ve had good luck at re:public.  Shuffleboard at Brave Horse Tavern.

Morning Run

I’d probably head over to the Sculpture park and run up Myrtle Edwards Park: here is a mapmyrun route.

Seattle

Enjoy Seattle, it’s a fun town!  I recommend, generally, shellfish, thai food, and coffee.  Also you can play the fun people guessing game: “software engineer or not” (advanced players can score points for Amazon or Microsoft sub-genres).  Also: if you don’t want to look like a tourist, leave the umbrella at home.  You know it rains more every year in New York city, right?

Jordan EllenbergPrime subset sums

Efrat Bank’s interesting number theory seminar here before break was about sums of arithmetic functions on short intervals in function fields.  As I was saying when I blogged about Hast and Matei’s paper, a short interval in F_q[t] means:  the set of monic degree-n polynomials P such that

deg(P-P_0) < h

for some monic degree-n P_0 and some small h.  Bank sets this up even more generally, defining an interval in the space V of global sections of a line bundle on an arbitrary curve over F_q.  In Bank’s case, by contrast with the number field case, an interval is an affine linear subspace of some ambient vector space of forms.  This leads one to wonder:  what’s special about these specific affine spaces?  What about general spaces?
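To make the definition concrete, here is a tiny Python sketch (my own toy illustration, not Bank's setup) with polynomials over F_q stored as coefficient tuples: the short interval around a monic degree-n polynomial P_0 is obtained by freezing every coefficient of degree at least h and letting the bottom h coefficients range over F_q, giving q^h elements.

    from itertools import product

    def short_interval(P0, q, h):
        # P0 = (c_0, c_1, ..., c_n) is monic of degree n over F_q (so c_n = 1).
        # The interval {P : deg(P - P0) < h} consists of all P that agree with
        # P0 in every coefficient of degree >= h.
        fixed = P0[h:]
        for low in product(range(q), repeat=h):
            yield low + fixed

    q, h = 3, 2
    P0 = (1, 0, 2, 0, 1)              # a hypothetical P_0 = t^4 + 2t^2 + 1 over F_3
    interval = list(short_interval(P0, q, h))
    print(len(interval))              # q**h = 9 monic polynomials of degree 4
    print(interval[:3])

This also makes visible why the interval is an affine linear subspace of the coefficient space: the top coordinates are pinned and the bottom h are free.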

And then one wonders:  well, what classical question over Z does this correspond to?  So here it is:  except I’m not sure this is a classical question, though it sort of seems like it must be.

Question:  Let c > 1 be a constant.  Let A be a set of integers with |A| = n and max(A) < c^n.  Let S be the (multi)set of sums of subsets of A, so |S| = 2^n.  What can we say about the number of primes in S?  (Update:  as Terry points out in comments, I need some kind of coprimality assumption; at the very least we should ask that there’s no prime factor common to everything in A.)

I’d like to say that S is kind of a “generalized interval” — if A is the first n powers of 2, it is literally an interval.  One can also ask about other arithmetic functions:  how big can the average of Mobius be over S, for instance?  Note that the condition on max(S) is important:   if you let S get as big as you want, you can make S have no primes or you can make S be half prime (thanks to Ben Green for pointing this out to me.)  The condition on max(S) can be thought of as analogous to requiring that an interval containing N has size at least some fixed power of N, a good idea if you want to average arithmetic functions.
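To get a crude feel for the numbers, here is a small Python sketch (my own quick experiment, nothing more) with A taken to be the first n powers of 2, so that S is literally the interval {0, 1, ..., 2^n - 1}; any other A with max(A) < c^n, and with no prime factor common to all its elements, can be swapped in.

    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    def subset_sums(A):
        # multiset of all 2^n subset sums, built one element of A at a time
        sums = [0]
        for a in A:
            sums = sums + [s + a for s in sums]
        return sums

    n = 16
    A = [2 ** k for k in range(n)]        # max(A) = 2^(n-1) < 2^n, so c = 2 works
    S = subset_sums(A)                    # here S is just 0, 1, ..., 2^n - 1
    prime_count = sum(1 for s in S if is_prime(s))
    print(len(S), prime_count, prime_count / len(S))

For n = 16 this gives 65536 subset sums, of which roughly a tenth are prime, consistent with the prime number theorem for an honest interval of that length; the interesting question is what happens for less structured choices of A.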

Anyway:  is anything known about this?  I can’t figure out how to search for it.


January 04, 2017

Jordan EllenbergFrance

Back from France!  Just there for 2 1/2 days on the way back from Israel.

  1.  Cheeses eaten:  Bethmale, Camembert, unidentified Basque sheep’s milk, Chabechou, Crottin de Chavignol, 28-month aged Comté, Vieux Cantal, Brie de Nangis.  The Brie and the Chabechou (both from La Fermette) were the highlights.
  2.  On the other hand, Berthillon ice cream not as amazing as I remembered — possibly because American ice cream has gotten a lot better since 2004, the last time I was in Paris?
  3. The Louvre is by far the most difficult major world museum to navigate.  Why, for instance, the system where the rooms have numbers but the room numbers aren’t on the map?
  4. I wonder whether museums with long entry lines have considered opening a separate, presumably shorter line for people willing to pay double (3x, 5x?) the usual entry fee.
  5. In the most Thomas Friedman moment of my life, an Algerian cab driver heard my accent and immediately began telling me how much he loved Trump.  If you want to know the Algerian cab driver conventional wisdom on this topic, it’s that Trump is a businessman and will “calmer” once he becomes President.  I am doubtful.  Tune in later to see who was right, the Algerian cab driver or his skeptical passenger.
  6. AB really loved the Rodin sculptures in the Musee d’Orsay.  I think because her height is such that she’s quite close to their feet.  Rodin sculpted a hell of a foot.
  7. I learned from Leila Schneps that the masters of the Académie Française have declared that “oignon” is to be spelled “ognon” from now on!  I can’t adjust.

Jordan EllenbergIsrael

The main part of our trip was to Jerusalem, where I met my new nephew and organized a workshop about new developments in the polynomial method.

  • The vote on UN resolution 2334 was held while I was there but nobody I talked to seemed really focused on it.  One Israeli businessman told me “the Arabs won’t destroy Israel, Netanyahu won’t destroy Israel, the only thing that can destroy Israel is the residue of Bolshevism.”  Then he told me things about taxes that curled my hair.  Apparently if you do work for person X, and bill them for 100,000 shekels, you owe taxes on the 100,000 shekels, whether or not person X pays you!   They go bankrupt or just stiff you, you’re screwed.  If you go to the tax agency to say “how can I pay taxes on income I didn’t get” they say “the problem is between you and person X, go sue them if you need that money.”
  • Tomer Schlank told me my accent was really good!  I don’t speak Hebrew, by the way.  But I’m actually very good at imitating accents, which is a problem, because when I carefully think about what I’m going to say (in Spanish, French, German, Hebrew, whatever) and then say it, I can sometimes fool the person I’m talking to into thinking I’m going to understand their response.  Fortunately, I’m also very good at the blank look.
  • My kids really wanted to go to the science museum and I was reluctant — there are science museums all over the world! — but I relented and actually it was kind of great.  Culturally interesting, first of all, because the place was completely packed with Orthodox families; it was Hanukkah, a rare time when kids are off school but it’s not chag, which makes it massive go time for kid-oriented activities in Jerusalem.  I was happy for my kids to experience that feeling of immersion in a crowd that was on the one hand Jewish but on the other hand quite culturally alien, in a way that secular cosmopolitan schwarma Israel really isn’t.  As for the museum itself:   “Games in Light and Shadow” was a really charming exhibit, sort of a cross between interactive science and a walk-through art space a la Meow Wolf.
  • Sadly, we didn’t make it back to Cafe Itamar this time.  But we did return to Morduch.  I know it’s a tourist destination but in this case the tourists have it right, the Iraqi Jewish food there is incredible.  Get the kubbe soup, get the hummus basar.  This would be the best food I ate in Israel were it not for my Mizrachi machetunim in Afula.  So it might be the best food you can eat in Israel.

n-Category Café Globular for Higher-Dimensional Knottings (Part 3)

guest post by Scott Carter

This is my 3rd post about Jamie Vicary’s program Globular. And here I want to give you an exercise in manipulating a sphere in 4-dimensional space until it is demonstrably unknotted. But first I’ll need to remind you a lot about knotting phenomena. By the way, I lied. In the previous post, I said that the next one would be about braiding. I will write the surface braid post soon, but first I want to give you a fun exercise.

This post, then, will describe a 2-sphere embedded in 4-space, and we’ll learn to try and unknot it.

Loops of string can be knotted in 3-dimensional space. For example, go out to your tool shed and get out your orange heavy-duty 25 foot long extension cord. Plug the male end into the female and tape them together so that the plug cannot become undone. I would wager that as you try to unravel this on your living room floor, or your front lawn, you’ll discover that it is knotted.

Rather than using a physical model such as an extension cord, we can also create knots using the classical knot template of which I wrote in the first post. There you create knots by beginning with as many cups as you like in whatever nesting pattern that you like. For example:

And yes, these nestings are associated to elements in the Temperley-Lieb algebra. Then you can click and swipe left or right at the top endpoints and thereby entangle strings as you choose:

Close the result with a collection of caps, and a link results:

It is possible that the resulting link can be disentangled. To play with Globular, keep your link in the workspace, click on the identity menu item on the right, and then start trying to apply Reidemeister moves to it. For example, when I finished simplifying my diagram, I got the trefoil. I didn’t expect this!

If you want to see how I got the trefoil, you can look at my sequence of isotopy moves within globular.

By clicking the identity button on the right, you preserved the moves you used. The graphic immediately above indicates an annulus embedded in 3-space times an interval [0,1]. At the bottom is the knot that I drew; at the top is the result of the isotopy.

You can go into that isotopy, click the identity button, and modify it further to find a more efficient path between the knots!

Just as circles can be linked and knotted in 3-space, surfaces can be knotted and linked in 4-space. The knotting of higher dimensional spheres was observed by Emil Artin in a 1925 paper. Most progress on higher-dimensional knots occurred in the era circa 1960 through 1975. At that time, new algebraic topological techniques, particularly homological studies of covering spaces, were developed. Some authors, Yajima in particular, also initiated a diagrammatic theory. The diagram of a knotted surface is its projection from 4-space into 3-space with crossing information indicated. I like to think of the diagram as representing the knotted surface in a thin neighborhood of 3-space. The bits of surface that are indicated by breaks protrude into 4-space in that thin neighborhood. Still this imagery does not help manipulate the surface. By analogy, if you think of a classical knot as being confined to a thin sheet of space, then you’ll feel constrained in pulling the under-crossing arc.

As sighted humans, we perceive only surface. We posit solid. So as I sit at my desk, I see its top and I presume that it is made of a thick wood. The drawer in front of me defines a cavity in which paper clips, rubber bands, and old papers sit. But I can’t see through this. I only see the front of the drawer. When I look at the diagram of a knotted surface, I create visual tropes to help me understand. How many layers are there behind the visible layer? Where does the surface fold? Where does it interweave? Within the globular view (project 2) of a knotted surface, we see (1) the face of the surface that lies closest to us, and (2) the collection of double curves, triple points, folds, and cusps that induce the knotting. Globular is new — only a year old. So its depiction of these things is not as elegant as it might be, but all the information is there. Mouse-overs let us know the type and the levels of all the singular sets. Cusps and optimal points of double curves (these are double curves in the projection of the surface into 3-space, not double curves in 4-space) have the same shape. They should have different colors. Similarly, births, deaths, saddles, and crotches will all be cup- or cap-like. Hover the mouse over the critical point, and you’ll see what it is.

Here:

is the image of a sphere in 4-space that looks like it might be knotted. But in fact it is not. This is the image from a worksheet that I created specifically for the energetic readers of this blog. In the worksheet, I created a sphere embedded in 4-space that is constructed as Zeeman’s 1-twist spin of the figure-8 knot (4_1 in the knot tables). At least I think I did! Zeeman’s general twist-spinning theorem says that the n-twist-spin of a classical knot is fibered, with its fiber being the (punctured) n-fold branched cover of the 3-sphere branched along the given knot. When n=1, this punctured branched cover is the 3-ball, and so the embedded sphere bounds a ball and therefore is unknotted.
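
For reference, here is the statement I am leaning on, paraphrased in my own words rather than quoted from Zeeman’s paper:

    Theorem (Zeeman, 1965). For a classical knot \(K \subset S^3\) and any \(n \ge 1\), the \(n\)-twist-spin of \(K\) is a fibered 2-knot in \(S^4\) whose fiber is the \(n\)-fold cyclic cover of \(S^3\) branched along \(K\), punctured at a point. In particular, for \(n = 1\) the branched cover is \(S^3\) itself, the punctured fiber is a 3-ball, and a 2-sphere bounding an embedded 3-ball is unknotted.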

The worksheet that I created here is a quebra-cabeça — a mind-bending puzzle for the reader. Can you use Globular to unknot this embedded sphere? By the way, I am not 100 percent sure that I constructed this example correctly ;-) But here is my advice for unknotting it. There are two critical points, one saddle and one crotch, that need to have their heights interchanged. To interchange these heights, add two swallow tails: a left (up or down) swallow tail (L ST (up or down)) on the interior red fold line, and a right (down or up) (R ST (down or up)) on the interior green fold. These folds are mouse-over named cap and cup, respectively. The swallow tails allow you to turn the surface on its side. Then pull the stuff (type I, ysp, and psy) that lies along these folds into the swallowtail regions, and meanwhile interchange the heights of the crotch and saddle. When you get done with that, I’ll give another hint, and I may have done these operations myself.

n-Category Café Field Notes on the Behaviour of a Large Assemblage of Ecologists

I’ve just come back from the annual conference of the British Ecological Society in Liverpool. For several years I’ve had a side-interest in ecology, but I’d never spent time with a really large group of ecologists before, and it taught me some things. Here goes:

  1. Size and scale. Michael Reed memorably observed that the American Mathematical Society is about the same size as the American Society for Nephrology, “and that’s just the kidney”. Simply put: not many people care about mathematics.

    The British Ecological Society (BES) meeting had 1200 participants, which is about ten times bigger than the annual international category theory meeting, and still only a fraction of the size of the conference run by the Ecological Society of America. You may reply that the US Joint Mathematics Meetings attract about 7000 participants; but as Reed pointed out (under the heading “Most of Science is Biology”), the Society for Neuroscience gets about 30,000. Even at the BES meeting in our small country, there were nearly 600 talks, 70 special sessions, and 220 posters. In the parallel sessions, you had a choice of 12 talks to go to at any given moment in time.

  2. Concision. Almost all talks were 12 minutes, with 3 minutes for questions. You cannot, of course, say much in that time.

    With so many people attending and wanting to speak, it’s understandable that the culture has evolved this way. And I have to say, it’s very nice that if you choose to attend a talk and swiftly discover that you chose badly, you’ve only lost 15 minutes.

    But there are many critiques of enforced brevity, including from some very distinguished academics. It’s traditionally held that the most prestigious journals in all of science are Nature and Science, and in both cases the standard length of an article is only about three pages. The style of such papers is ludicrously condensed, and from my outsider’s point of view I gather that there’s something of a backlash against Nature and Science, with less constipated publications gaining ground in people’s mental ranking systems. When science is condensed too much, it takes on the character of a sales pitch.

    This is part of a wider phenomenon of destructive competition for attention. For instance, almost all interviews on TV news programmes are under ten minutes, and most are under five, with much of that taken up by the interviewer talking. The very design favours sloganeering and excludes all points that are too novel or controversial to explain in a couple of sentences. (The link is to a video of Noam Chomsky, who makes this point very effectively.) Not all arguments can be expressed to a general audience in a few minutes, as every mathematician knows.

  3. The pleasure of introductions. Many ecologists study one particular natural system, and often the first few minutes of their talks are a delight. You learn something new and amazing about fungi or beavers or the weird relationships between beetles and ants. Did you know that orangutans spend 80% of the day resting in their nests? Or that if you give a young orangutan some branches, he or she will instinctively start to weave them together in a nest-like fashion, as an innate urge that exists whether or not they’ve been taught how to do it? I didn’t.

    Orangutan resting in nest

  4. Interdisciplinarity. I’ve written before about the amazing interdisciplinarity of biologists. It seems to be ingrained in the intellectual culture that you need people who know stuff you don’t know, obviously! And that culture just isn’t present within mathematics, at least not to anything like the same extent.

    For instance, this afternoon I went to a talk about the diversity of microbiomes. The speaker pointed out that for what she was doing, you needed expertise in biology, chemistry, and informatics. She was unusual in actually spelling it out and spending time talking about it. Most of the time, speakers moved seamlessly from ecology to statistics to computation (typically involving processing of large amounts of DNA sequence data), without making a big deal of it.

    But there’s a byproduct of interdisciplinarity that troubles my mathematical soul:

  5. The off-the-shelf culture. Some of the speakers bowled me over with their energy, vision, tenacity, and positive outlook. But no one’s superhuman, so it’s inevitable that if your work involves serious aspects of multiple disciplines, you’re probably not going to look into everything profoundly. Or more bluntly: if you need some technique from subject X and you know nothing about subject X, you’re probably just going to use whatever technique everybody else uses.

    The ultimate reason why I ended up at this conference is that I’m interested in the quantification of biological diversity. So, much of the time I chose to go to talks that had the word “diversity” in the title, just to see what measure of diversity was used by actual practising ecologists. (A short sketch of the standard measures appears at the end of this post.)

    It wasn’t very surprising that almost all the time, as far as I could tell, there was no apparent examination of what the measures actually measured. They simply used whatever measure was predominant in the field.

    Now, I need to temper that with the reminder that the talks are ultra-short, with no time for subtleties. But still, when I asked one speaker why he chose the measure that he chose, the answer was that it’s simply what everyone else uses. And I can’t really point a finger of blame. He wasn’t a mathematician, any more than I’m an ecologist.

  6. The lack of theory. If this conference was representative of ecology, the large majority of ecologists study some specific system. By “system” I mean something like European hedgerow ecology, or Andean fungal ecology, or the impact of heatwaves on certain types of seaweed.

    This is, let me be clear, not a bad thing. Orders of magnitude more people care about seaweed than n-categories. But still, I was surprised by the sheer niche-ness of general theory in the context of ecology as a whole. A group of us are working on a system of diversity measures that are general in a mathematician’s sense; they effortlessly take in such examples as human demography, tropical forestry, epidemiology, and resistance to antibiotics. This didn’t seem like that big a deal to me previously — it’s just the bog-standard generality of mathematics. But after this week, I can see that from many ecologists’ eyes, it may seem insanely general.

    Actually, the most big-picture talks I saw were very unmathematical. They were, in fact, about policy and the future of humanity. I’m not being flippant:

  7. Unabashed politics. Mathematics is about an idealized world of imagination. Ecology is about our one and only natural world — one that we happen to be altering at an absolutely unprecedented rate. Words like “Brexit” and “Trump” came up dozens of times in the conference talks, and not in a tittery jocular way. The real decisions of people with real political power will have real, irreversible effect in the real world.

    Once again, this brought home to me that mathematics is not like (the rest of) science.

    It’s not just that we don’t have labs or experiments or hypothesis testing (at least, not in the same way). It’s that we can do mathematics in complete isolation from the realities of the world that human beings have made.

    We don’t have to think about deforestation or international greenhouse gas treaties or even local fishery byelaws. We might worry about the applications of mathematics — parasitic investment banks or deadly weapons or governments surveilling and controlling their citizens — but we can actually do mathematics in lamb-like innocence.

    On the other hand, for large parts of ecology, the political reality is an integral consideration.

    I saw some excellent talks, especially from Georgina Mace and Hugh Possingham, on policy and influencing governments. Possingham was talking about saving Portugal-sized areas of Australia from industrial destruction. (His advice for scientists engaging with governments: “Turn up. Have purpose. Maintain autonomy.”) Mace spoke on what are quite possibly the biggest threats to the entire planet: climate change, floods and heatwaves, population growth, and fragmentation and loss of habitats.

    It’s inspiring to see senior scientists being unafraid to repeat basic truths to those in power, to gather the available evidence and make broad estimates with much less than 100% of the data that one might wish for, in order to push changes that will actually improve human and other animal lives.
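
To make point 5 concrete, here is a minimal sketch (mine, not anything shown at the conference) of the Hill numbers — the “effective number of species” family that contains species richness, the exponential of Shannon entropy, and the inverse Simpson concentration as the special cases q = 0, 1, 2. The community abundances below are made up.

    import numpy as np

    def hill_number(p, q):
        """Hill number (effective number of species) of order q for a
        vector of relative abundances p (non-negative, summing to 1)."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        if np.isclose(q, 1.0):
            # q -> 1 limit: exponential of the Shannon entropy
            return float(np.exp(-np.sum(p * np.log(p))))
        return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

    community = [0.5, 0.3, 0.1, 0.05, 0.05]   # made-up relative abundances
    for q in (0, 1, 2):
        print(q, hill_number(community, q))   # q=0: richness, q=2: inverse Simpson

Larger q discounts rare species more heavily, which is exactly the kind of modelling choice that deserves a moment’s thought before reaching for whatever index the last paper used.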

BackreactionThe Bullet Cluster as Evidence against Dark Matter

Once upon a time, at the far end of the universe, two galaxy clusters collided. Their head-on encounter tore apart the galaxies and left behind two reconfigured heaps of stars and gas, separating again and moving apart from each other, destiny unknown. Four billion years later, a curious group of water-based humanoid life-forms tries to make sense of the galaxies’ collision. They point their

January 03, 2017

Noncommutative GeometryGamma functions and nonarchimedean analysis

Happy New Year! I view blog writing as a great opportunity to reach out to members of the mathematics community and especially the younger members; so in this sense blog writing is, for me, very similar to writing for Math Reviews. I have enjoyed doing both for many years (and many many years for MR!). Recently I wrote a review for MR on the paper “Twisted characteristic p zeta functions”

n-Category Café Basic Category Theory Free Online

My textbook Basic Category Theory, published by Cambridge University Press, is now also available free as arXiv:1612.09375.

Cover of Basic Category Theory

As I wrote when I first announced the book:

  • It doesn’t assume much.
  • It sticks to the basics.
  • It’s short.

I can now add a new property:

  • It’s free.

And it’s not only free, it’s freely editable. The book’s released under a Creative Commons licence that allows you to edit and redistribute it, just as long as you state the authorship accurately, don’t use it for commercial purposes, and preserve the licence. Click the link for details.

Why might you want to edit it?

Well, maybe you want to use it to teach a category theory course, but none of your students have taken topology, so you’d rather remove all the topological examples. That’s easy to do. Or maybe you want to add some examples, or remove whole sections. Or it could just be that you can’t stand some of the notation, in which case all you need to do is change some macros. All easy.

Alternatively, perhaps you’re not planning to teach from it — you just want to read it, but you want to change the formatting so that it’s comfortable to read on your favourite device. Again, this is very easy to do.

Emily recently announced the dead-tree debut of her own category theory textbook, published by Dover. She did it the other way round from me: the online edition came first, then the paper version. (I also did it that way round for my first book.) But the deal I had with Cambridge was that they’d publish first, then I could put it on the arXiv under a Creative Commons licence 18 months later.

We’ve talked a lot on this blog about parasitic academic publishers, so I’d like to emphasize here what a positive contribution Cambridge University Press has made, and is continuing to make, to the academic community. CUP is a part of Cambridge University, and I think I’m right in saying that it’s not allowed to make a profit. (Correction: I was wrong. However, maximizing profits is not CUP’s principal aim.) It has led the way in allowing mathematics authors to post free versions of their books online. For instance, apart from my own two books, you quite likely know of Allen Hatcher’s very successful book Algebraic Topology, also published in paper form by CUP and, with their permission, available free online.

Since a few people have asked me privately for opinions on publishers, I’ll also say that working with CUP for this book was extremely smooth. The contract (including the arXiv release) was easily arranged, and the whole production process was about as low-stress as I can imagine it being. This wasn’t the case for my first book in 2003, also with CUP, which because of editing/production problems was a nightmare of stress. That made me very reluctant to go with CUP again, but I’m really glad that I chose to do so.

The low stress this time was partly because of one key request that I made at the beginning: we agreed that I would not share the LaTeX files with anyone at CUP. Thus, all I ever sent CUP was the PDF, and no one except me had ever seen my LaTeX source until the arXiv release just now. What that meant was that all changes, down to the comma, had to go through me. For example, the way the proofreading worked was that the proofreader would send me corrections and suggestions and I’d implement them, rather than him making changes first and me approving or reverting them second.

For anyone with a perfectionist/pedantic/… streak like mine (insert your own word), that’s an enormous stress relief. I’d recommend it to any authors of a similar personality. Again, it’s to CUP’s credit that they agreed to doing things this way — I’m not sure that all publishers would.

So the book’s now free to all. If you make heavy use of it and can afford to do so, I hope you’ll reciprocate the support that CUP has shown the mathematical community by buying a copy. But in any case, I hope you enjoy it.

BackreactionHow to use an "argument from authority"

I spent the holidays playing with the video animation software. As a side-effect, I produced this little video. If you'd rather read than listen, here's the complete voiceover: It has become a popular defense of science deniers to yell “argument from authority” when someone quotes an experts’ opinion. Unfortunately, the argument from authority is often used incorrectly. What is an “argument

January 02, 2017

BackreactionThe 2017 Edge Annual Question: Which Scientific Term or Concept Ought To Be More Widely Known?

My first thought when I heard the 2017 Edge Annual Question was “Wasn’t that last year's question?” It wasn’t. But it’s almost identical to the 2011 question, “What scientific concept would improve everybody’s cognitive toolkit.” That’s ok, I guess, the internet has an estimated memory of 2 days, so after 5 years it’s reasonable to assume nobody will remember their improved toolkit. After that

Tommaso DorigoA Visit To Israel

I am spending a week in Israel to visit three physics institutes for colloquia and seminars: Tel Aviv University (where I gave a colloquium yesterday), the Technion in Haifa (where I am giving a seminar today), and the Weizmann Institute in Rehovot (where I'll speak next Wednesday).


Geraint F. LewisBlog rebirth - a plan for 2017

It is now the twilight zone between Christmas and New Year. 2016 has been a difficult and busy year, and my recreational physics and blogging have suffered. But it is time for a rebirth, and I plan to get back to writing about science and space here. But here are some things from 2016.

A Fortunate Universe: Life in a Finely Tuned Cosmos was published. This has sucked up a huge amount of time and mental activity, and that continues. I will blog about the entire writing and publishing process at some point in the future, but it really is quite a complex process with many minefields to navigate. But it is done, and I am planning to write more in the future.
We also made a video to advertise the book!

I've done a lot of writing in other places, including Cosmos magazine on "A universe made for me? Physics, fine-tuning and life", and commentary in New Scientist and several articles in The Conversation including

Peering into the future: does science require predictions?

and

The cosmic crime-scene hunt for clues on how galaxies are formed

And one of my articles from last year, We are lucky to live in a universe made for us, was selected for inclusion in The Best Australian Science Writing 2016.
There have been a whole bunch of science papers as well, but I will write about those when the blog is up and running at full speed :)

December 31, 2016

n-Category Café NSA Axes Math Grants

Old news, but interesting: the US National Security Agency (NSA) announced some months ago that it was suspending funding to its Mathematical Sciences Program. The announcement begins by phrasing it as a temporary suspension—

…[we] will be unable to fund any new proposals during FY2017 (i.e. Oct. 1, 2016–Sept. 30, 2017)

—but by the end, sounds resigned to a more permanent fate:

We thank the mathematics community and especially the American Mathematical Society for its interest and support over the years.

We’ve discussed this grant programme before on this blog.

The NSA is said to be the largest employer of mathematicians in the world, and has been under political pressure for obvious reasons over the last few years, so it’s interesting that it cut this programme. Its British equivalent, GCHQ, is doing the opposite, expanding its mathematics grants aggressively. But still, GCHQ consistently refuses to engage in any kind of adult, evidence-based discussion with the mathematical community on what the effect of its actions on society might actually be.

December 29, 2016

John BaezAzimuth Backup Project (Part 2)

I want to list some databases that are particularly worth backing up. But to do this, we need to know what’s already been backed up. That’s what this post is about.

Azimuth backups

Here is information as of now (21:45 GMT 20 December 2016). I won’t update this information. For up-to-date information see

Azimuth Backup Project: Issue Tracker.

For up-to-date information on the progress of each of the individual databases listed below, click on my summary of what’s happening now.

Here are the databases that we’ve backed up:

• NASA GISTEMP website at http://data.giss.nasa.gov/gistemp/ — downloaded by Jan and uploaded to Sakari’s datarefuge server.

• NOAA Carbon Dioxide Information Analysis Center (CDIAC) data at ftp.ncdc.noaa.gov/pub/data/paleo/cdiac.ornl.gov-pub — downloaded by Jan and uploaded to Sakari’s datarefuge server.

• NOAA Carbon Tracker website at http://www.esrl.noaa.gov/psd/data/gridded/data.carbontracker.html — downloaded by Jan, uploaded to Sakari’s datarefuge server.

These are still in progress, but I think we have our hands on the data:

• NOAA Precipitation Frequency Data at http://hdsc.nws.noaa.gov/hdsc/pfds/ and ftp://hdsc.nws.noaa.gov/pub — downloaded by Borislav, not yet uploaded to Sakari’s datarefuge server.

• NOAA Carbon Dioxide Information Analysis Center (CDIAC) website at http://cdiac.ornl.gov — downloaded by Jan, uploaded to Sakari’s datarefuge server, but there’s evidence that the process was incomplete.

• NOAA website at https://www.ncdc.noaa.gov — downloaded by Jan, who is now attempting to upload it to Sakari’s datarefuge server.

• NOAA National Centers for Environmental Information (NCEI) website at https://www.ncdc.noaa.gov — downloaded by Jan, who is now attempting to upload it to Sakari’s datarefuge server, but there are problems.

• Ocean and Atmospheric Research data at ftp.oar.noaa.gov — downloaded by Jan, now attempting to upload it to Sakari’s datarefuge server.

• NOAA NCEP/NCAR Reanalysis ftp site at ftp.cdc.noaa.gov/Datasets/ncep.reanalysis/ — downloaded by Jan, now attempting to upload it to Sakari’s datarefuge server.

I think we’re getting these now, more or less:

• NOAA National Centers for Environmental Information (NCEI) ftp site at ftp://eclipse.ncdc.noaa.gov/pub/ — in the process of being downloaded by Jan, “Very large. May be challenging to manage with my facilities”.

• NASA Planetary Data System (PDS) data at https://pds.nasa.gov — in the process of being downloaded by Sakari.

• NOAA tides and currents products website at https://tidesandcurrents.noaa.gov/products.html, which includes the sea level trends data at https://tidesandcurrents.noaa.gov/sltrends/sltrends.html — Jan is downloading this.

• NOAA National Centers for Environmental Information (NCEI) satellite datasets website at https://www.ncdc.noaa.gov/data-access/satellite-data/satellite-data-access-datasets — Jan is downloading this.

• NASA JASON3 sea level data at http://sealevel.jpl.nasa.gov/missions/jason3/ — Jan is downloading this.

• U.S. Forest Service Climate Change Atlas website at http://www.fs.fed.us/nrs/atlas/ — Jan is downloading this.

• NOAA Global Monitoring Division website at http://www.esrl.noaa.gov/gmd/dv/ftpdata.html — Jan is downloading this.

• NOAA Global Monitoring Division ftp data at aftp.cmdl.noaa.gov/ — Jan is downloading this.

• NOAA National Data Buoy Center website at http://www.ndbc.noaa.gov/ — Jan is downloading this.

• NASA-ESDIS Oak Ridge National Laboratory Distributed Active Archive (DAAC) on Biogeochemical Dynamics at https://daac.ornl.gov/get_data.shtml — Jan is downloading this.

• NASA-ESDIS Oak Ridge National Laboratory Distributed Active Archive (DAAC) on Biogeochemical Dynamics website at https://daac.ornl.gov/ — Jan is downloading this.

Other backups

Other backups are listed at

The Climate Mirror Project, https://climate.daknob.net/.

This nicely provides the sizes of various backups, and other useful information. Some are ‘signed and verified’ with cryptographic keys, but I’m not sure exactly what that means, and the details matter.

About 90 databases are listed here, along with some size information and some information about whether people have already backed them up or are in process:

Gov. Climate Datasets (Archive). (Click on the tiny word “Datasets” at the bottom of the page!)




Tommaso DorigoINFN Gives 73 Permanent Positions To Young Researchers In Physics

Today I am actually quite proud of my research institute, the "Istituto Nazionale di Fisica Nucleare" (INFN), which leads Italian research in fundamental physics. In fact, a selection to hire 73 new researchers with permanent positions has reached its successful conclusion. Rather than giving you my personal opinions (very positive!), I think it is better to let the INFN president Fernando Ferroni, and the numbers themselves, speak.


Doug NatelsonSome optimism at the end of 2016

When the news is filled with bleak items, it's easy to become pessimistic.   Bear in mind that modern communications plus the tendency for bad news to get attention plus the size of the population can really distort perception.  To put that another way, 56 million people die every year (!), but now you are able to hear about far more of them than ever before.  

Let me make a push for optimism, or at least try to put some things in perspective.  There are some reasons to be hopeful.  Specifically, look here, at a site called "Our World in Data", produced at Oxford University.  These folks use actual numbers to point out that this is actually, in many ways, the best time in human history to be alive:
  • The percentage of the world's population living in extreme poverty is at an all-time low (9.6%).
  • The percentage of the population that is literate is at an all-time high (85%), as is the overall global education level.
  • Child mortality is at an all-time low.
  • The percentage of people enjoying at least some political freedom is at an all-time high.
That may not be much comfort to, say, an unemployed coal miner in West Virginia, or an underemployed former factory worker in Missouri, but it's better than the alternative.   We face many challenges, and nothing is going to be easy or simple, but collectively we can do amazing things, like put more computing power in your hand than existed in all of human history before 1950, set up a world-spanning communications network, feed 7B people, detect colliding black holes billions of lightyears away by their ripples in spacetime, etc.  As long as we don't do really stupid things, like make nuclear threats over twitter based on idiots on the internet, we will get through this.   It may not seem like it all the time, but compared to the past we live in an age of wonders.

December 28, 2016

John BaezGive the Earth a Present: Help Us Save Climate Data

Getz Ice Shelf

We’ve been busy backing up climate data before Trump becomes President. Now you can help too, with some money to pay for servers and storage space. Please give what you can at our Kickstarter campaign here:

Azimuth Climate Data Backup Project.

If we get $5000 by the end of January, we can save this data until we convince bigger organizations to take over. If we don’t get that much, we get nothing. That’s how Kickstarter works. Also, if you donate now, you won’t be billed until January 31st.

So, please help! It’s urgent.

I will make public how we spend this money. And if we get more than $5000, I’ll make sure it’s put to good use. There’s a lot of work we could do to make sure the data is authenticated, made easily accessible, and so on.

The idea

The safety of US government climate data is at risk. Trump plans to have climate change deniers running every agency concerned with climate change. So, scientists are rushing to back up the many climate databases held by US government agencies before he takes office.

We hope he won’t be rash enough to delete these precious records. But: better safe than sorry!

The Azimuth Climate Data Backup Project is part of this effort. So far our volunteers have backed up nearly 1 terabyte of climate data from NASA and other agencies. We’ll do a lot more! We just need some funds to pay for storage space and a server until larger institutions take over this task.

The team

• Jan Galkowski is a statistician with a strong interest in climate science. He works at Akamai Technologies, a company responsible for serving at least 15% of all web traffic. He began downloading climate data on the 11th of December.

• Shortly thereafter John Baez, a mathematician and science blogger at U. C. Riverside, joined in to publicize the project. He’d already founded an organization called the Azimuth Project, which helps scientists and engineers cooperate on environmental issues.

• When Jan started running out of storage space, Scott Maxwell jumped in. He used to work for NASA—driving a Mars rover among other things—and now he works for Google. He set up a 10-terabyte account on Google Drive and started backing up data himself.

• A couple of days later Sakari Maaranen joined the team. He’s a systems architect at Ubisecure, a Finnish firm, with access to a high-bandwidth connection. He set up a server, he’s downloading lots of data, he showed us how to authenticate it with SHA-256 hashes, and he’s managing many other technical aspects of this project.
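
In case “authenticate it with SHA-256 hashes” sounds mysterious, the idea is simply to record a cryptographic fingerprint of each file at download time, so that anyone holding a copy later can check it has not been altered. A minimal Python sketch of the idea (my illustration, not the team’s actual tooling; the file name is made up):

    import hashlib

    def sha256_of_file(path, chunk_size=1 << 20):
        """Stream a (possibly very large) file through SHA-256 and return the hex digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Publish the digest at download time; anyone holding a copy later can
    # recompute it and confirm the archived file is bit-for-bit identical.
    # print(sha256_of_file("gistemp_backup.tar.gz"))   # hypothetical file name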

There are other people involved too. You can watch the nitty-gritty details of our progress here:

Azimuth Backup Project – Issue Tracker.

and you can learn more here:

Azimuth Climate Data Backup Project.


December 27, 2016

Mark Chu-CarrollOkonomilatkes!

I’m working on some type theory posts, but it’s been slow going.

In the meantime, it’s Chanukah time. Every year, my family makes me cook potato latkes for Chanukah. The problem with that is, I don’t particularly like potato latkes. This year, I came up with the idea of trying to tweak them into something that I’d actually enjoy eating. What I came up with is combining a latke with another kind of fried savory pancake that I absolutely love: the Japanese okonomiyaki. The result? Okonomilatkes.

Ingredients:

  • 1/2 head green cabbage, finely shredded.
  • 1 1/2 pounds potatoes
  • 1/2 cup flour
  • 1/2 cup water
  • 1 beaten egg
  • 1/2 pound crabstick cut into small pieces
  • Tonkatsu sauce (buy it at an Asian grocery store in the Japanese section. The traditional brand has a bulldog logo on the bottle.)
  • Katsuobushi (shredded bonito)
  • Japanese mayonnaise (sometimes called Kewpie mayonnaise. You can find it in squeeze bottles in any Asian grocery. Don’t substitute American mayo – Japanese mayo is thinner, less oily, a bit tart, sweeter, and creamier. It’s really pretty different.)
  • 1 teaspoon salt
  • 1/2 teaspoon baking powder.

Instructions

  1. In a very hot pan, add about a tablespoon of oil, and when it’s nearly smoking, add the cabbage. Saute until the cabbage wilts and starts to brown. Remove from the heat, and set aside to cool.
  2. Using either the grater attachment of a food processor, or the coarse side of a box grater, shred the potatoes. (I leave the skins on, but if that bugs you, peel them first).
  3. Squeeze as much water as you can out of the shredded potatoes.
  4. Mix together the water, flour, baking powder, egg, and salt into a thin batter.
  5. Add the potatoes, cabbage, and crabstick to the batter, and stir together.
  6. Split this mixture into four portions.
  7. Heat a nonstick pan on medium high heat, add a generous amount of oil, and add one quarter of the batter. Let it cook until nicely browned, then flip, and cook the other side. On my stove, it takes 3-5 minutes per side. Add oil as needed while it’s cooking.
  8. Repeat with the other 3 portions
  9. To serve, put a pancake on a plate. Squeeze on a bunch of stripes of mayonnaise, then add a bunch of the tonkatsu sauce, and sprinkle with the katsuobushi.

December 24, 2016

Steinn SigurðssonJólasveinar og Jólakettir

The origins and history of the Yule Lads with bonus Christmas Cat…

Even I did not know that peak Yule Lads was 82!
Criminy!

Steinn SigurðssonLast minute stocking stuffers for nörds

Ok, I confess, I was supposed to get these reviewed before the Holidays, but a Sequence of Unfortunate Events Intervened and I am only part way through these.

Anywho, if you need a last second pressie for random acquaintances so disposed, there are a couple of interesting science books out there:

  1. A Fortunate Universe: Life in a Finely Tuned Cosmos by Geraint Lewis and Luke Barnes is a nice, up-to-date book for the general (educated) public on modern physics and cosmology.
    It covers modern cosmology and some of the Big Questions of our times, in particular the anthropic issue of how “fine tuned” our Universe is.

    Welshman finds QSO

  2. Modern Prometheus: Editing the Human Genome with Crispr-Cas9 by James Kozubek is a personal history of the discovery of the CRISPR-CAS9 genes and their use, and a discussion of the implications and potential of the technology.
    It is not an easy book: it does not flow, the discussion is technical given the intended audience, and the narrative digresses frequently into often convoluted discussion.
    But the topic is interesting and the coverage is comprehensive.

  3. Mapping the Heavens: The Radical Scientific Ideas That Reveal the Cosmos by Priyamvada Natarajan.
    Ok, I haven’t read this one, don’t have a copy.
    But, I’ve heard very good things about it.
    A big picture of current research in cosmology, aimed at the educated general reader, covering a range of topics but focusing on the search for dark matter, if what I am told is true.
    I’d like to read it, so I’m sure you ought to also.

Steinn SigurðssonAll Roads Lead to Rome

Roads to Rome

All roads really lead to Rome

moovel lab makes funky maps,
go play

Richard EastherNew York State of Mind

It's not often an advertisement sums up a deep truth about the universe, but here's one that does.

One of the commonest questions about the Big Bang is "If the universe is expanding, is everything in it getting bigger? The solar system, the sun, the earth, our bodies and the atoms we are made of?"

The answer is no: anything that can hold itself together won't get any bigger as the Universe grows. For big things (like our Milky Way galaxy, or the Solar System within it), gravity provides the glue that stops them from stretching; for little things like rocks and people, electrical forces between atoms hold them together. Only the space between galaxies (and clusters of galaxies) grows as the Universe expands, which is just what Einstein's General Relativity predicts, and our most sensitive measurements confirm. (Phew)

But Manhattan Mini Storage nails it in eight words. Those New Yorkers, always in a hurry...

December 22, 2016

Terence TaoAMS open math notes

I just learned (from Emmanuel Kowalski’s blog) that the AMS has just started a repository of open-access mathematics lecture notes.  There are only a few such sets of notes there at present, but hopefully it will grow in the future; I just submitted some old lecture notes of mine from an undergraduate linear algebra course I taught in 2002 (with some updating of format and fixing of various typos).

 

[Update, Dec 22: my own notes are now on the repository.]



December 21, 2016

Sean CarrollMemory-Driven Computing and The Machine

Back in November I received an unusual request: to take part in a conversation at the Discover expo in London, an event put on by Hewlett Packard Enterprise (HPE) to showcase their new technologies. The occasion was a project called simply The Machine — a step forward in what’s known as “memory-driven computing.” On the one hand, I am not in any sense an expert in high-performance computing technologies. On the other hand (full disclosure alert), they offered to pay me, which is always nice. What they were looking for was simply someone who could speak to the types of scientific research that would be aided by this kind of approach to large-scale computation. After looking into it, I thought that I could sensibly talk about some research projects that were relevant to the program, and the technology itself seemed very interesting, so I agreed to stop by London on the way from Los Angeles to a conference in Rome in honor of Georges Lemaître (who, coincidentally, was a pioneer in scientific computing).

Everyone knows about Moore’s Law: computer processing power doubles about every eighteen months. It’s that progress that has enabled the massive technological changes witnessed over the past few decades, from supercomputers to handheld devices. The problem is, exponential growth can’t go on forever, and indeed Moore’s Law seems to be ending. It’s a pretty fundamental problem — you can only make components so small, since atoms themselves have a fixed size. The best current technologies sport numbers like 30 atoms per gate and 6 atoms per insulator; we can’t squeeze things much smaller than that.

So how do we push computers to faster processing, in the face of such fundamental limits? HPE’s idea with The Machine (okay, the name could have been more descriptive) is memory-driven computing — change the focus from the processors themselves to the stored data they are manipulating. As I understand it (remember, not an expert), in practice this involves three aspects:

  1. Use “non-volatile” memory — a way to store data without actively using power.
  2. Wherever possible, use photonics rather than ordinary electronics. Photons move faster than electrons, and cost less energy to get moving.
  3. Switch the fundamental architecture, so that input/output and individual processors access the memory as directly as possible.

Here’s a promotional video, made by people who actually are experts.

The project is still in the development stage; you can’t buy The Machine at your local Best Buy. But the developers have imagined a number of ways that the memory-driven approach might change how we do large-scale computational tasks. Back in the early days of electronic computers, processing speed was so slow that it was simplest to store large tables of special functions — sines, cosines, logarithms, etc. — and just look them up as needed. With the huge capacities and swift access of memory-driven computing, that kind of “pre-computation” strategy becomes effective for a wide variety of complex problems, from facial recognition to planning airline routes.
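
As a toy illustration of that trade-off (my own sketch, nothing to do with HPE’s software), here is the pre-computation idea in Python: build a large table once, then answer queries by indexing into it rather than recomputing. Which side wins depends on how large and how fast your memory is — precisely the balance that a memory-driven architecture aims to shift.

    import time
    import numpy as np

    # Pre-compute a dense table of sine values once.
    grid = np.linspace(0.0, 2 * np.pi, 1_000_000)
    sin_table = np.sin(grid)

    def sin_lookup(x):
        """Nearest-neighbour lookup into the pre-computed table."""
        idx = np.round(x / (2 * np.pi) * (len(grid) - 1)).astype(int)
        return sin_table[np.clip(idx, 0, len(grid) - 1)]

    x = np.random.uniform(0.0, 2 * np.pi, 5_000_000)

    t0 = time.perf_counter(); direct = np.sin(x);     t1 = time.perf_counter()
    t2 = time.perf_counter(); looked = sin_lookup(x); t3 = time.perf_counter()
    print(f"compute: {t1 - t0:.3f}s  lookup: {t3 - t2:.3f}s  "
          f"max error: {np.abs(direct - looked).max():.1e}")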

It’s not hard to imagine how physicists would find this useful, so that’s what I briefly talked about in London. Two aspects in particular are pretty obvious. One is searching for anomalies in data, especially in real time. We’re in a data-intensive era in modern science, where very often we have so much data that we can only find signals we know how to look for. Memory-driven computing could offer the prospect of greatly enhanced searches for generic “anomalies” — patterns in the data that nobody had anticipated. You can imagine how that might be useful for something like LIGO’s search for gravitational waves, or the real-time sweeps of the night sky we anticipate from the Large Synoptic Survey Telescope.

The other obvious application, of course, is on the theory side, to large-scale simulations. In my own bailiwick of cosmology, we’re doing better and better at including realistic physics (star formation, supernovae) in simulations of galaxy and large-scale structure formation. But there’s a long way to go, and improved simulations are crucial if we want to understand the interplay of dark matter and ordinary baryonic physics in accounting for the dynamics of galaxies. So if a dramatic new technology comes along that allows us to manipulate and access huge amounts of data (e.g. the current state of a cosmological simulation) rapidly, that would be extremely useful.

Like I said, HPE compensated me for my involvement. But I wouldn’t have gone along if I didn’t think the technology was intriguing. We take improvements in our computers for granted; keeping up with expectations is going to require some clever thinking on the part of engineers and computer scientists.

December 20, 2016

Doug NatelsonMapping current at the nanoscale - part 2 - magnetic fields!

A few weeks ago I posted about one approach to mapping out where current flows at the nanoscale, scanning gate microscopy. I had made an analogy between current flow in some system and traffic flow in a complicated city map. Scanning gate microscopy would be analogous to recording the flow of traffic in/out of a city as a function of where you chose to put construction barrels and lane closures. If sampled finely enough, this would give you a sense of where in the city most of the traffic tends to flow.

Of course, that's not how utilities like Google Maps figure out traffic flow maps or road closures. Instead, applications like that track the GPS signals of cell phones carried in the vehicles. Is there a current-mapping analogy here as well? Yes. There is some "signal" produced by the flow of current, if only you have a sufficiently sensitive detector to find it. That is the magnetic field. Flowing current density \(\mathbf{J}\) produces a local magnetic field \(\mathbf{B}\), thanks to Ampere's law, \(\nabla \times \mathbf{B} = \mu_{0} \mathbf{J}\).
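
To make the forward problem concrete, here is a minimal numerical sketch (mine, not from the post): integrate the Biot-Savart law along a straight wire segment and compare the field a small height above its midpoint with the textbook infinite-wire result \(B = \mu_{0} I / (2 \pi d)\). The current, wire length, and height are made-up numbers. The microscopies described below tackle the harder inverse problem: given a map of \(\mathbf{B}\), infer the current distribution that produced it.

    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability in T*m/A

    def b_above_wire_segment(I, length, z, n=200_001):
        """Biot-Savart field magnitude a height z above the midpoint of a straight
        wire segment of the given length carrying current I along x."""
        x = np.linspace(-length / 2, length / 2, n)   # positions of current elements
        dl = np.gradient(x)                           # element lengths
        r = np.sqrt(x**2 + z**2)                      # element-to-field-point distance
        # |dl x r_hat| = dl * z / r for this geometry; the field points along y.
        dB = MU0 / (4 * np.pi) * I * dl * z / r**3
        return dB.sum()

    I, z = 1e-3, 1e-6                                  # 1 mA, observed 1 micron above the wire
    numeric = b_above_wire_segment(I, length=1e-3, z=z)
    analytic = MU0 * I / (2 * np.pi * z)               # infinite-wire limit
    print(numeric, analytic)                           # both ~2e-4 T, i.e. a couple of gauss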
Scanning SQUID microscope image of x-current density in a GaSb/InAs structure, showing that the current is carried by the edges. Scale bar is 20 microns.



Fortunately, there now exist several different technologies for performing very local mapping of magnetic fields, and therefore the underlying pattern of flowing current in some material or device.  One older, established approach is scanning Hall microscopy, where a small piece of semiconductor is placed on a scanning tip, and the Hall effect in that semiconductor is used to sense local \(B\) field.

Scanning NV center microscopy to see magnetic fields. Scale bars are 400 nm.
Considerably more sensitive is the scanning SQUID microscope, where a tiny superconducting loop is placed on the end of a scanning tip and used to detect incredibly small magnetic fields. As the figure shows, it is possible to see when current is carried by the edges of a structure rather than by the bulk of the material, for example.

A very recently developed method is to use the exquisite magnetic field sensitive optical properties of particular defects in diamond, NV centers.  The second figure (from here) shows examples of the kinds of images that are possible with this approach, looking at the magnetic pattern of data on a hard drive, or magnetic flux trapped in a superconductor.  While I have not seen this technique applied directly to current mapping at the nanoscale, it certainly has the needed magnetic field sensitivity.  Bottom line:  It is possible to "look" at the current distribution in small structures at very small scales by measuring magnetic fields.

December 16, 2016

Sean CarrollQuantum Is Calling

Hollywood celebrities are, in many important ways, different from the rest of us. But we are united by one crucial similarity: we are all fascinated by quantum mechanics.

This was demonstrated to great effect last year, when Paul Rudd and some of his friends starred with Stephen Hawking in the video Anyone Can Quantum, a very funny vignette put together by Spiros Michalakis and others at Caltech’s Institute for Quantum Information and Matter (and directed by Alex Winter, who was Bill in Bill & Ted’s Excellent Adventure). You might remember Spiros from our adventures emerging space from quantum mechanics, but when he’s not working as a mathematical physicist he’s brought incredible energy to Caltech’s outreach programs.

Now the team is back again with a new video, this one titled Quantum is Calling. This one stars the amazing Zoe Saldana, with an appearance by John Cho and the voices of Simon Pegg and Keanu Reeves, and of course Stephen Hawking once again. (One thing about Caltech: we do not mess around with our celebrity cameos.)

If you’re interested in the behind-the-scenes story, Zoe and Spiros and others give it to you here:

If on the other hand you want all the quantum-mechanical jokes explained, that’s where I come in:

Jokes should never be explained, of course. But quantum mechanics always should be, so this time we made an exception.

John PreskillZoe Saldana Answers the Quantum Call


Stephen Hawking & Zoe Saldana try to save Simon Pegg’s cat

Watch Quantum Is Calling with Zoe Saldana, Stephen Hawking, Keanu Reeves, Paul Rudd, Simon Pegg, and John Cho. 

We are on the verge of a quantum revolution. Like in the days of the space race, technology has brought an impossibly distant frontier to our doorstep. Just over 17 years ago Michael Crichton wrote a parallel universe-hopping adventure, Timeline, whose fundamental transportation technology required the advent of quantum computing – a concept that was still only theoretical at the time. Today, IBM’s five-quantum bit (or qubit) array is at the fingertips of anyone within reach of the cloud. Google is building a fifty-qubit array. Microsoft is bankrolling a brain trust that will build a quantum computer based on topological qubits. Intel is investing $50 million on spin qubit technology. The UK has announced a £270 million program, and the EU a €1 billion program, to develop quantum technologies. And even more quantum circuits are on the way; the equivalent of competing classes of space shuttles. Only these crafts aren’t meant to travel through space, or even time. They travel through the complete unknown. Qubits fluctuate between the infinite universes of possibility, their quantum states based inherently on uncertainty. And the best way to harness that seemingly unlimited computing power, and take the first steps into the quantum frontier, is through the elusive concept of entanglement.
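
As a concrete, minimal illustration of what entanglement means (my own sketch, independent of the film): starting from two qubits in the state |00>, a Hadamard on the first qubit followed by a CNOT produces the Bell state (|00> + |11>)/sqrt(2), whose two measurement outcomes are individually random but perfectly correlated.

    import numpy as np

    # Single-qubit gates and the two-qubit CNOT, written as matrices.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I2 = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    # Start in |00>, put the first qubit in superposition, then entangle with CNOT.
    psi = np.array([1.0, 0.0, 0.0, 0.0])
    psi = CNOT @ (np.kron(H, I2) @ psi)

    print(psi)                 # amplitudes of |00>, |01>, |10>, |11>: (1/sqrt(2), 0, 0, 1/sqrt(2))
    print(np.abs(psi) ** 2)    # outcome probabilities: 50% "00", 50% "11", never "01" or "10"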


So then, the quantum crafts are ready; the standby lights on their consoles blinking in a steady yellow cadence. What we’re missing are the curiosity-driven pilots willing to grapple with the uncertain and unpredictable.

The quantum mechanics property of entanglement was discovered by Albert Einstein, Boris Podolsky, and Nathan Rosen and soon after described in a famous 1935 paper. Einstein called it “spooky action at a distance.” Virtually all of his contemporaries, including Edwin Schrödinger who coined the term “entanglement”, and the entire subsequent generation of physicists would struggle with this paradox. Although their struggles would be necessary to arrive at this particular moment in time, this precipice, their collective and prodigious minds were, and remain to be, handcuffed by training and experiences rooted in a classical understanding of the laws of nature – derived from phenomena that can be seen or felt, either directly or indirectly. Quantum entanglement, on the other hand, presents a puzzle of a fundamentally abstract nature.


Paul Rudd & Stephen Hawking chatting it up

When Paul Rudd defeated Stephen Hawking in a game of quantum chess – a game built from the ground up with a quantum mechanical set of moves leveraging superposition and entanglement – our intent was to suggest that an entirely new generation of physicists can emerge with an intuitive understanding of entanglement, even before having to dip their toes in mathematics.

Language, Young Lady

Following up on Anyone Can Quantum, the challenges were to (1) further introduce and elaborate on quantum entanglement and (2) reach a wider audience, particularly women. Coming from a writer’s perspective, my primary concern was to make the abstract concept of entanglement somehow relatable. Popular stories, at their most basic, are told through interactions between people in relationships. Only through relational interactions can characters be challenged enough to affect a change in behavior, and as a result support a theme. Early story concepts evolved from the idea that any interaction with entanglement would result in a primary problem of miscommunication. Entanglement, in any form approaching personification, would be fully alien and incomprehensible. Language then, I decided, would become the fabric by which we could create a set of interactions between a human and entanglement.


Dr. Louise Banks (Amy Adams) & Ian Donnelly (Jeremy Renner) in Arrival

This particular dynamic was tackled in the recent movie Arrival. There, the fictional linguist Dr. Louise Banks is tasked with translating the coffee-ring-stain sign language of a visiting alien civilization before one of the world’s many nervous armies attacks them and causes an intergalactic incident. In the process of decoding the dense script, the controversial Sapir-Whorf theory is brought up introducing the idea that language shapes the way people think. While this theory may or may not hold snow, I am still impressed with the notion that a shared, specific, and descriptive language is necessary to collaborate and innovate. This impression is supported by my own experience in molecular and cell biology research in which communicating new findings always requires expending a tremendous amount of energy crafting a new and appropriate set of terms, or in other words, an expansion of the language.

Marvel To The Rescue


The Tesseract & Groot in Guardians of the Galaxy

To drive their building, multi-threaded Infinity Stones storyline, the Marvel Cinematic Universe (MCU) has been fortuitously bold in broaching quantum physics concepts and attempting to ground them in real science, taking advantage of the contacts available through the Science & Entertainment Exchange. Through these consultations, movies like Thor and Ant-Man have already delivered to a wide and diverse audience complex concepts such as Einstein-Rosen bridges (wormholes) and the Quantum Realm.

The Ant-Man consultation, in particular, resulted in a relationship between IQIM’s own Spyridon Michalakis (aka Spiros) and Ant-Man himself, Paul Rudd. This relationship was not only responsible for Anyone Can Quantum, but it was also the reason why Spiros was invited to be a panelist at the Silicon Valley Comic Con earlier this year, where he was interviewed by science journalist Zuberoa “Zube” Marcos of the global press outfit, El Pais, a woman who would end up playing a central role in getting Quantum Is Calling off the ground.

Leonard Susskind Juan Maldacena

So the language of quantum physics was being slowly introduced to a wider, global population thanks to the Marvel films. It occurred to us that we had the opportunity to explain some of the physics concepts brought up by the MCU through the lens of quantum physics, and entanglement in particular. The one element of the MCU storylines that was most attractive to us was the Tesseract and its encased Space Stone. It was the first of the Infinity Stones introduced (in Captain America: The First Avenger) and the one that drove the plot of The Avengers, culminating in the creation of a wormhole over Manhattan. For Spiros, the solution was simple: In order to create wormholes, the exotic matter comprising the Space Stone would likely have to exploit entanglement, as described in a conjecture, dubbed “ER=EPR”, published by Leonard Susskind and Juan Maldacena in 2013.


The USS Enterprise (NCC-1701) in the Star Trek TOS episode “The Immunity Syndrome”

Finding Our Star

The remaining challenge was to find the right actress to deliver the new story. The earliest version of our story (back in June, 2016) was based on the crew of the Starship Enterprise encountering an alien creature that was the embodiment of entanglement (a.k.a. The Flying Spaghetti Monster), a creature that attempted communication with Earthlings by reciting sound bites originating from past Earth radio transmissions. In this story iteration, Chief communications officer Uhura would have used her skills to translate the monster’s message amidst rising tension (just like in Arrival).


Zoe Saldana as Lt. Nyota Uhura

In the subsequent revisions to the story we had to simplify the script and winnow down the cast. We opted to lean on Zoe Saldana’s Uhura. Her character could take on the role of captain, communications officer, and engineer. Zoe was already widely known across multiple sci-fi franchises featuring aliens (namely Star Trek, Guardians of the Galaxy, and Avatar) and her characters have had to speak in or translate those languages.

Zoe = Script

But before approaching Zoe Saldana – and at that point in time, we had no idea how to go about that – we needed to complete a script. Two other incredible resources were available to us: the voices of Dr. Hawking and Keanu Reeves; and we had to make all three work together in a unique comedy – one that did not squander the involvement of either voice, but also served to elevate the role of Zoe.

Stephen Hawking Keanu Reeves

Even in the first version of the story it was my intent to have Keanu Reeves provide the voice for entanglement, expressed through the most alien sounding languages I could imagine. To compress the story to fit our budget we were forced to narrow the list of languages to two, and I chose Dothraki and Navajo. The role of Keanu’s character was to test, recruit, and ultimately invite Zoe Saldana to enter and experience entanglement in the Quantum Realm. Dr. Stephen Hawking would be the reluctant guide that helps Zoe interpret the confusing clues embedded within the Dothraki and Navajo to arrive at the ER=EPR conjecture.

As for the riddle itself, I chose to use two poems from Through the Looking Glass (and What Alice Found There), The Walrus and The Carpenter as well as Haddock’s Eyes, as the reference material, so that those savvy enough to solve even half the riddle on their own would have a further clue pointing them to the final answer.


Simon Pegg’s cat, Schrodinger (not his actual cat)

The disappearance of Simon’s cat, Schrödinger, had a tripartite function of (a) presenting an inciting incident that urged Zoe to subject herself to the puzzle-solving trial, which we called the Riddle of the Tesseract, (b) to demonstrate the risk of touching the Tesseract and the gravity of her climactic choice, and (c) invoking Schrödinger’s famous thought experiment to present the idea that, in the Quantum Realm, the cat and Zoe are both dead and alive, an uncertainty.

The story was done. And it looked good on paper. But the script was just a piece of paper unless we got Zoe Saldana to sign on.


Zuberoa Marcos

Zoe = Zube

For weeks, Spiros worked all of his connections only to come up empty. It wasn’t until he mentioned our holy quest to Zube (from El Pais and Silicon Valley Comic Con) during an unrelated Skype session that he had the first glimmer of hope, even kismet. Zube had been working on arranging an interview with Zoe for months, an interview that would be taking place three days later in Atlanta. Without even a second thought, Spiros purchased a plane ticket and was on his way to Atlanta two days later. Watching the interview take place, he heard Zoe answer one of Zube’s questions about what kind of technology interested her the most. It was the transporter, the teleportation machine used by the crew of the Enterprise to shift matter to and from surfaces of alien planets. This was precisely the kind of technology we were interested in describing at a quantum level! Realizing this was the opening we needed, Zube nodded over to Spiros and made the introductions.

It turns out Zoe had been fascinated by science fiction since her early childhood, being particularly obsessed with Frank Herbert’s Dune. Moreover, she was interested in playing the role of our lead character. In the weeks that followed, communication proceeded through managers in an attempt to nail down a filming date.


Mariel, Zoe, and Cicely Saldana

The Dangers of Miscommunication

I probably don’t need to remind you that Zoe Saldana is a core component of three gigantic franchises. That means tight schedules, press conferences, and international travel. Ultimately Zoe said that her travel commitments wouldn’t allow her to film our short. It was back to square one. We were dead in the water. The script was just a piece of paper.

However, for some reason, Spiros and Zube were not willing to concede. Zube found out about Zoe Saldana’s production company Cinestar and got in contact with coordinator Diego Gonzalez, to set up a lunch meeting. At lunch, Diego informed Zube and Spiros that Zoe really wanted to do this, but her team was under the impression that filming for our short video had to take place the week Star Trek: Beyond was to be released (Zoe was arguably busier than the POTUS during that week). Spiros informed Cinestar that we would accommodate whatever date Zoe could be available. Having that hurdle removed paved the way for a concrete film date to be set, October 25th. And now the real work began.


Simon Pegg in Shaun of the Dead

Finding Common Language

We had set the story inside Simon Pegg’s house and the script included voice-over dialogue for the superstar, but we had yet to even contact Simon. We had written in a part with Paul Rudd on a voicemail message. And we had also included a sixth character that would knock on the door and force Zoe to make her big decision. On top of that I had incorporated Dothraki and Navajo versions of century-old poems that had yet to be translated into those two languages. While Spiros worked on chasing down the talent, I nervously attempted to make contact with experts in the two languages.


David J. Peterson

I remember watching a video of Prof. David J. Peterson, creator of the Dothraki language for HBO’s Game of Thrones, speaking at Google about the process of crafting the language. Some unknown courage surfaced and I hunted down contact information for the famous linguist. I found an old website of his, an email address, and sent an inquiry at about midnight pacific standard time on October 14th, the day before my birthday. Within 45 minutes David had responded with interest in helping out. I was floored. And I couldn’t help geeking out. But more importantly this meant we would have the most accurate translation humanly possible. And when one is working on behalf of Caltech you definitely feel the pressure to be above reproach, or unsullied ;).


Keanu Reeves, Jennifer Wheeler, a pumpkin, a highlighter & my left arm

Finding a Navajo translator was comparatively difficult. A couple days after receiving Dr. Peterson’s email, I was in Scottsdale, AZ with my brother. I had previously scheduled the trip so that I could be in attendance at a book-signing featuring two of my favorite authors, as a birthday gift to myself. The event was held at the Poisoned Pen bookstore where many other local authors would regularly hold book-signings. While I was geeking out over meeting my favorite writing duo, as well as over my recent interaction with David Peterson, I was also stressed by the pressure to come through on an authentic Navajo translation. My brother urged me to ask the proprietors of the Poisoned Pen for any leads. And wouldn’t you know it, they had recently hosted a book-signing for the author of a Code Talkers book, and she was local. A morning of emails led to Jennifer Wheeler. We had struck gold. Jennifer had recently overseen Navajo translations of Star Wars: A New Hope and Finding Nemo, complete with voice-overs. There was probably nobody more qualified in the world.


Keanu Reeves as Ted “Theodore” Logan in Bill & Ted’s Excellent Adventure

So it turns out that Navajo is a much more difficult language to translate and speak than I had anticipated. For instance, there are over a hundred vowel sounds. So even though the translation was in good hands, I would be imposing on Keanu Reeves one of the greatest vocal challenges he would ever undertake. Eventually I arranged to have Jennifer on hand during Keanu’s voice recording. Here’s what he had to record (phonetically):

Tsee /da / a / ko / ho / di / say / tsaa, / a / nee / di

aɫ / tso / n’ / shay / ch’aa / go

Echo Papa Romeo / do / do / chxih / da

Bi / nee / yay / bi / zhay / ho / lo / nee / bay / do / bish / go.


Alex Winter & Zoe Saldana hard at work

Filming Day

After months of planning and weeks of script revisions, filming finally happened at an opulent, palatial residence in the Hollywood Hills (big props to Shaun Maguire and Liana Kadisha for securing the location). Six cats. Three trainers. Lights. Cameras. Zube. Zoe Saldana actually showed up! Along with her sisters, Cinestar, and even John Cho! Spiros had gotten assurances from Simon Pegg that he would lend his name and golden voice so we were able to use the ridiculous “Simon’s Peggs” wood sign that we had crafted just for the shoot. Within a few busy hours we were wrapped. All the cats and props were packed and back in LA traffic, where we all seem to exist more often than not. Now the story was left to the fate of editing and post-production.

 

In Post

Unlike the circumstances involved with Anyone Can Quantum, for which there was a fast approaching debut date, Spiros and I actually had time to be an active part of the post-production process. Alex Winter, Trouper Productions, and STITCH graciously involved us through virtually every step.

One thing that became quite apparent through the edits was the lack of a strong conclusion. Zoe’s story was designed to be somewhat open-ended. Although her character arc was meant to reach a conclusion with the decision to enter the Quantum Realm, it was clear that the short still needed a clear resolution.


What Seraph looks like as code in the Matrix Reloaded

Through much debate and workshopping, Spiros and I finally arrived at bookend scenes that took advantage of Keanu Reeves’s emblematic representation of, and inescapable entanglement with, The Matrix. Our ultimate goal is to create stories that reflect the quantum nature of the universe, the underlying quantum code that is the fabric from which all things emerge, exist, and interact. So, in a way, The Matrix wasn’t that far off.

Language Is Fluid

LIQUi|> (“liquid”), or Language-Integrated Quantum Operations, is an architecture, programming language, and tools suite designed for quantum computing, developed by Microsoft’s Quantum Architectures and Computation Group (QuArC). Admittedly taking a few liberties, on Spiros’s advice I used actual LIQUi|> commands to create a short script that established a gate (or data structure) that I called Alice (which is meant to represent Zoe and her location), created an entanglement between Alice and the Tesseract, then teleported the Tesseract to Alice. You’ll notice that the visual and sound effects are ripped right from The Matrix.
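To make the sequence concrete, here is a minimal sketch of the same entangle-then-teleport steps as a NumPy state-vector simulation. This is Python rather than actual LIQUi|> syntax, and the labels (“Tesseract” as the unknown state, “Alice” as the receiving qubit) and the amplitudes are purely illustrative; it follows the textbook three-qubit teleportation circuit, not the exact script used in the short.

```python
# A minimal sketch (Python/NumPy, *not* actual LIQUi|> commands) of an
# entangle-then-teleport sequence.  Qubit 0 holds the "Tesseract's"
# unknown state; qubit 2 is "Alice", the receiver.
import numpy as np

I2 = np.eye(2, dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
H  = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # projector onto |0>
P1 = np.array([[0, 0], [0, 1]], dtype=complex)   # projector onto |1>

def lift(ops):
    """Tensor a list of single-qubit operators into one 3-qubit operator."""
    out = ops[0]
    for m in ops[1:]:
        out = np.kron(out, m)
    return out

def single(gate, target, n=3):
    return lift([gate if q == target else I2 for q in range(n)])

def cnot(control, target, n=3):
    return (lift([P0 if q == control else I2 for q in range(n)]) +
            lift([P1 if q == control else (X if q == target else I2)
                  for q in range(n)]))

# Unknown state to teleport (arbitrary normalized amplitudes).
alpha, beta = 0.6, 0.8j
psi = np.kron(np.array([alpha, beta]), np.array([1, 0, 0, 0], dtype=complex))

# Entangle: Bell pair shared between qubit 1 and Alice (qubit 2).
psi = cnot(1, 2) @ single(H, 1) @ psi

# Teleport: Bell-basis rotation on qubits 0 and 1.
psi = single(H, 0) @ cnot(0, 1) @ psi

# Measure qubits 0 and 1; for brevity, postselect on outcome 00,
# which requires no Pauli correction on Alice's qubit.
psi = lift([P0, P0, I2]) @ psi
psi = psi / np.linalg.norm(psi)

# Alice's qubit (qubit 2) now carries the Tesseract's state.
print(psi.reshape(2, 2, 2)[0, 0, :])   # ~ [0.6+0j, 0+0.8j]
```

Postselecting on the 00 outcome just keeps the example short; in the full protocol the other three measurement outcomes are repaired by applying a Pauli X and/or Z to Alice’s qubit.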

This set up the possibility of adapting Neo’s famous monologue (from the end of the original Matrix) so we could hint that Zoe was somewhere adrift within the quantum code that defines the Quantum Realm. Yes, both Spiros and I were in the studio when Keanu recorded those lines (along with his lines in Dothraki and Navajo). Have I mentioned geeking out yet? An accompanying sequence of matrix code, or digital rain, had to be constructed that could accommodate examples of entanglement-related formulas. As you might have guessed, the equations highlighted in the digital rain at the end of the short are real, most of which came from this paper on emergent space (of which Spiros is a co-author).


Keanu Reeves & Keanu

Listen To Your Friend Keanu Reeves. He’s A Cool Dude.

With only a few days left before our debut date, Simon Pegg, Stephen Hawking and Paul Rudd all came through with their voice-over samples. Everything was then stitched together and the color correction, sound balancing, and visual effects were baked into the final video and phew. Finally, and impossibly, through the collaboration of a small army of unique individuals, the script had become a short movie. And hopefully it has become something unique, funny, and inspiring, especially to any young women (and men) who may be harboring an interest in, or a doubt preventing them from, delving into the quantum realm.


December 14, 2016

Jacques Distler MathML Update

For a while now, Frédéric Wang has been urging me to enable native MathML rendering for Safari. He and his colleagues have made many improvements to Webkit’s MathML support. But there were at least two show-stopper bugs that prevented me from flipping the switch.

Fortunately:

  • The STIX Two fonts were released this week. They represent a big improvement on Version 1, and are finally definitively better than LatinModern for displaying MathML on the web. Most interestingly, they fix this bug. That means I can bundle these fonts1, solving both that problem and the more generic problem of users not having a good set of Math fonts installed.
  • Thus inspired, I wrote a little Javascript polyfill to fix the other bug.

While there are still a lot of remaining issues (for instance this one, since fixed), I think Safari’s native MathML rendering is now good enough for everyday use (and, in enough respects, superior to MathJax’s) to enable it by default in Instiki, Heterotic Beast and on this blog.

Of course, you’ll need to be using2 Safari 10.1 or Safari Technology Preview.

Update:

Another nice benefit of STIX Two fonts is that itex can support both Chancery (\mathcal{}) and Roundhand (\mathscr{}) symbols:

\begin{split}
\backslash\mathtt{mathcal}\{\}:&\,\mathcal{ABCDEFGHIJKLMNOPQRSTUVWXYZ}\\
\backslash\mathtt{mathscr}\{\}:&\,\mathscr{ABCDEFGHIJKLMNOPQRSTUVWXYZ}
\end{split}

1 In an ideal world, OS vendors would bundle the STIX Two fonts with their next release (as Apple previously bundled the STIX fonts with MacOSX ≥10.7) and motivated users would download and install them in the meantime.

2 N.B.: We’re not browser-sniffing (anymore). We’re just checking for MathML support comparable to Webkit version 203640. If Google (for instance) decided to re-enable MathML support in Chrome, that would work too.

December 12, 2016

Steinn SigurðssonVikings, Santa & Jól

Grímfrost in Sweden give their take on the Meaning of the Season

– the Goat of Þór is serious business though…

Steinn SigurðssonStekkjastaur – the Elfs are Coming – Pt 1 Revisited

Today is the 12th of december, and there are 13 days until christmas.
This means, of course, that the first of the yule elves came to town this morning.

As you know, Bob, there are thirteen of the Yule Lads, or jólasveinar, as we call them.
And they are not really elves, since their mother is a troll.
The childstealing, cannibal Grýla, of legend.


Stekkjastaur

They come to town, one each day until christmas eve, and then leave in order, starting christmas day and finishing on the 6th of January.
They leave small treats or presents in the shoes of good children, if the kids know to leave their shoe out by the door or window. Strangely, our neighbour kids do not seem to have caught on to this, yet.
If you are naughty, you get a potato, or an onion.

Their arrival is critical, since any child who is so naughty as to not get a single piece of clothing, candle or a game before Christmas Eve, will be eaten by Jólakötturinn (the “Christmas Cat” – big as a house it is, silent, deadly).


Jólaköttur – feline solsticus

The lads are pranksters, and quite mean ones, none of your Ho, Ho Ho! Coke swilling softies. Stekkjastaur sneaks up on the ewes and sucks the milk out of their udders, though this is hard going as he is stilt-legged.

On the other hand, the munchkins find it very convenient to be in a multicultural family, especially since the Better Half is fond of the Feast of St Nicholas; my extended family feels presents should be given on christmas eve after dinner; while the in-laws go with the anglo-style stocking and christmas morning thing.

And the one time the Big Kid got a potato, she laughed so hard she fell over, and then came into the kitchen with a big grin and asked that we cook it for her… (it got donated to local wildlife as a compromise).

We are of course not talking superstition.
I mean, I don’t “believe” in elves.
I just know not to mess with them.
You leave their houses alone, leave a seat (1st class, natch) on the occasional flight for them, and maybe put out the occasional bowl of milk… elves don’t need your belief, and trolls of course don’t care, they just eat you.
I should note that Iceland’s one Nobel laureate treated the issue of elves in Icelandic culture extensively, so there.

Just remember, you must have an evergreen for the solstice festival, and you better burn it when you are done, after 12 days, of course.
If you do not, winter may never end!

Next one up, any hour now, is Giljagaur. Sneaky one.

repost from ’08

John PreskillThe weak shall inherit the quasiprobability.

Justin Dressel’s office could understudy for the archetype of a physicist’s office. A long, rectangular table resembles a lab bench. Atop the table perches a tesla coil. A larger tesla coil perches on Justin’s desk. Rubik’s cubes and other puzzles surround a computer and papers. In front of the desk hangs a whiteboard.

A puzzle filled the whiteboard in August. Justin had written a model for a measurement of a quasiprobability. I introduced quasiprobabilities here last Halloween. Quasiprobabilities are to probabilities as ebooks are to books: Ebooks resemble books but can respond to touchscreen interactions through sounds and animation. Quasiprobabilities resemble probabilities but behave in ways that probabilities don’t.


A tesla coil of Justin Dressel’s

 

Let p denote the probability that any given physicist keeps a tesla coil in his or her office. p ranges between zero and one. Quasiprobabilities can dip below zero. They can assume nonreal values, dependent on the imaginary number i = \sqrt{-1}. Probabilities describe nonquantum phenomena, like tesla-coil collectors,1 and quantum phenomena, like photons. Quasiprobabilities appear nonclassical.2,3

We can infer the tesla-coil probability by observing many physicists’ offices:

\text{Prob(any given physicist keeps a tesla coil in his/her office)} = \frac{\text{\# physicists who keep tesla coils in their offices}}{\text{\# physicists}} \, .

We can infer quasiprobabilities from weak measurements, Justin explained. You can measure the number of tesla coils in an office by shining light on the office, correlating the light’s state with the tesla-coil number, and capturing the light on photographic paper. The correlation needn’t affect the tesla coils. Observing a quantum state changes the state, by the Uncertainty Principle heralded by Heisenberg.

We could observe a quantum system weakly. We’d correlate our measurement device (the analogue of light) with the quantum state (the analogue of the tesla-coil number) unreliably. Imagine shining a dull light on an office for a brief duration. Shadows would obscure our photo. We’d have trouble inferring the number of tesla coils. But the dull, brief light burst would affect the office less than a strong, long burst would.
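A toy numerical picture of that trade-off (my own illustration, not the protocol on Justin’s whiteboard): couple a Gaussian “pointer” only weakly to a qubit’s Z value, so that any single readout is swamped by noise, yet the average over many shots still recovers the expectation value. The sketch below models only the noisy readout, not the back-action on the state.

```python
# Toy weak-measurement model: each shot shifts a noisy Gaussian pointer
# by a tiny amount g*z, where z = +1/-1 is the qubit's Z value.  One shot
# says almost nothing; averaging many shots recovers <Z>.
import numpy as np

rng = np.random.default_rng(0)

# Qubit state cos(theta/2)|0> + sin(theta/2)|1>, so <Z> = cos(theta).
theta = 1.0
p0 = np.cos(theta / 2) ** 2          # Born probability of z = +1

g     = 0.05                         # weak coupling: pointer shift per unit of Z
sigma = 1.0                          # pointer spread >> g  =>  weak measurement
shots = 200_000

z = rng.choice([+1, -1], size=shots, p=[p0, 1 - p0])   # underlying Z outcomes
readout = g * z + rng.normal(0.0, sigma, size=shots)    # noisy pointer positions

print("single-shot signal-to-noise ~", g / sigma)        # tiny: each shot is weak
print("estimated <Z> :", readout.mean() / g)             # ~ cos(theta), up to sampling noise
print("true <Z>      :", np.cos(theta))
```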

Justin explained how to infer a quasiprobability from weak measurements. He’d explained on account of an action that others might regard as weak: I’d asked for help.


Chaos had seized my attention a few weeks earlier. Chaos is a branch of math and physics that involves phenomena we can’t predict, like weather. I had forayed into quantum chaos for reasons I’ll explain in later posts. I was studying a function F(t) that can flag chaos in cold atoms, black holes, and superconductors.

I’d derived a theorem about F(t). The theorem involved a UFO of a mathematical object: a probability amplitude that resembled a probability but could assume nonreal values. I presented the theorem to my research group, which was kind enough to provide feedback.

“Is this amplitude physical?” John Preskill asked. “Can you measure it?”

“I don’t know,” I admitted. “I can tell a story about what it signifies.”

“If you could measure it,” he said, “I might be more excited.”

You needn’t study chaos to predict that private clouds drizzled on me that evening. I was grateful to receive feedback from thinkers I respected, to learn of a weakness in my argument. Still, scientific works are creative works. Creative works carry fragments of their creators. A weakness in my argument felt like a weakness in me. So I took the step that some might regard as weak—by seeking help.

 


Some problems, one should solve alone. If you wake me at 3 AM and demand that I solve the Schrödinger equation that governs a particle in a box, I should be able to comply (if you comply with my demand for justification for the need to solve the Schrödinger equation at 3 AM).4 One should struggle far into problems before seeking help.

Some scientists extend this principle into a ban on assistance. Some students avoid asking questions for fear of revealing that they don’t understand. Some boast about passing exams and finishing homework without the need to attend office hours. I call their attitude “scientific machismo.”

I’ve all but lived in office hours. I’ve interrupted lectures with questions every few minutes. I didn’t know if I could measure that probability amplitude. But I knew three people who might know. Twenty-five minutes after I emailed them, Justin replied: “The short answer is yes!”


I visited Justin the following week, at Chapman University’s Institute for Quantum Studies. I sat at his bench-like table, eyeing the nearest tesla coil, as he explained. Justin had recognized my probability amplitude from studies of the Kirkwood-Dirac quasiprobability. Experimentalists infer the Kirkwood-Dirac quasiprobability from weak measurements. We could borrow these experimentalists’ techniques, Justin showed, to measure my probability amplitude.
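For readers who want to see such an object misbehave, here is a small example of my own, using one common convention for the Kirkwood-Dirac distribution, Q(a, b) = ⟨b|a⟩⟨a|ρ|b⟩: for a qubit prepared along the y-axis and referenced to the Z and X eigenbases, the entries come out complex, yet they still sum to one.

```python
# Kirkwood-Dirac quasiprobability for a qubit (one common convention):
#   Q(a, b) = <b|a> <a|rho|b>,
# with {|a>} the Z eigenbasis and {|b>} the X eigenbasis.
import numpy as np

# State rho = |y+><y+|, where |y+> = (|0> + i|1>)/sqrt(2).
y_plus = np.array([1, 1j]) / np.sqrt(2)
rho = np.outer(y_plus, y_plus.conj())

Z_basis = [np.array([1, 0], dtype=complex),
           np.array([0, 1], dtype=complex)]
X_basis = [np.array([1, 1], dtype=complex) / np.sqrt(2),
           np.array([1, -1], dtype=complex) / np.sqrt(2)]

Q = np.zeros((2, 2), dtype=complex)
for i, a in enumerate(Z_basis):
    for j, b in enumerate(X_basis):
        Q[i, j] = (b.conj() @ a) * (a.conj() @ rho @ b)

print(Q)          # complex entries, e.g. 0.25 - 0.25j: not a probability
print(Q.sum())    # ~ 1: quasiprobabilities are still normalized
```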

The borrowing grew into a measurement protocol. The theorem grew into a paper. I plunged into quasiprobabilities and weak measurements, following Justin’s advice. John grew more excited.

The meek might inherit the Earth. But the weak shall measure the quasiprobability.

With gratitude to Justin for sharing his expertise and time; and to Justin, Matt Leifer, and Chapman University’s Institute for Quantum Studies for their hospitality.

Chapman’s community was gracious enough to tolerate a seminar from me about thermal states of quantum systems. You can watch the seminar here.

1Tesla-coil collectors consist of atoms described by quantum theory. But we can describe tesla-coil collectors without quantum theory.

2Readers foreign to quantum theory can interpret “nonclassical” roughly as “quantum.”

3Debate has raged about whether quasiprobabilities govern classical phenomena.

4I should be able also to recite the solutions from memory.