LHC
After Strings 2004, I spent the weekend in Paris, and hopped on the TGV to Geneva, where I’ve been spending the week at CERN. The big excitement here, of course, is the LHC, which is scheduled to turn on in the summer of 2007. The first half of the week was devoted to a Strings Workshop, where Fabiola Gianotti (of the ATLAS Collaboration) gave the lead-off lecture.
Wednesday afternoon, we got to tour the two main experimental facilities (ATLAS and CMS) and the magnet testing facility.
Since some of my readers aren’t physicists, let me reverse the order of things, and talk about the computing challenges involved. High energy experimental physics has always been on the cutting edge of computing technology (the World Wide Web, you’ll recall, was invented here at CERN), but the LHC experiments raise the bar considerably.
Because of the extraordinarily high luminosity of the LHC (10³⁴ cm⁻² s⁻¹), each detector will “see” about 10⁹ collisions/s. An impressive amount of computation takes place in the custom ASICs in the fast electronic triggers which whittle those events down to 100 “interesting” events/s. Here, I have already oversimplified the problem. The protons arrive in bunches, 25 ns apart. Each bunch crossing produces about 25 events. 25 ns is a short period of time, shorter than the time it takes a particle traveling at the speed of light to cross the detector. So there are a large number of events happening all-but-simultaneously, each producing hundreds of charged-particle tracks. ATLAS’s level-1 triggers have the job of disentangling those multiple events and cutting the event rate from 10⁹/s to 10⁵/s, with a discrimination time of 2 μs. To do this, the level-1 calorimetry trigger has to analyse more than 3000 Gbits of input data/s. The level-2 triggers cut the event rate further down to 100 events/s. These 100 events then need to be “reconstructed”. That’s done “offline” at a processor farm. Each event requires about 1 second of processing on a 1000 MIPS processor. The reconstructed event is whittled down to 0.5 MB of data. You can probably guess where I’m headed. 0.5 MB/event × 100 events/s: in a year of running you accumulate … 1.5 petabytes (1.5 million gigabytes) of data¹.
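For the curious, here is that arithmetic as a quick Python sketch. The inelastic cross-section (~100 mb) and the number of seconds taken as “a year of running” (~3×10⁷ s) are my assumptions, not numbers from the experiments; plugging them in reproduces the collision rate, the pile-up per bunch crossing, and the ~1.5 petabytes/year quoted above.

```python
# Back-of-the-envelope LHC trigger/data-rate arithmetic.
# Assumptions (mine, not the experiments'): inelastic pp cross-section ~100 mb,
# and "a year of running" taken as roughly a calendar year of seconds.

luminosity      = 1e34        # cm^-2 s^-1  (LHC design luminosity)
sigma_inelastic = 100e-27     # cm^2        (~100 mb, assumed)
bunch_spacing   = 25e-9       # s           (25 ns between bunch crossings)

collision_rate      = luminosity * sigma_inelastic        # ~1e9 collisions/s
events_per_crossing = collision_rate * bunch_spacing      # ~25 pile-up events

event_size       = 0.5e6      # bytes per reconstructed event (0.5 MB)
output_rate      = 100        # events/s surviving the triggers
seconds_per_year = 3e7        # ~a calendar year of seconds (assumed)

data_per_year = event_size * output_rate * seconds_per_year   # bytes

print(f"collisions/s        ~ {collision_rate:.1e}")
print(f"events per crossing ~ {events_per_crossing:.0f}")
print(f"data per year       ~ {data_per_year / 1e15:.1f} PB")
```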
And then begins the data analysis …
The construction engineering challenges are equally impressive. The CMS detector (the smaller, but heavier, of the two) is a 12,500 tonne instrument (the magnetic return yoke contains more iron than the Eiffel Tower), containing a 6 m diameter superconducting solenoid operating at 4 Tesla. The various pieces need to be positioned with a precision of 0.01 mm, after having been lowered into a pit 100 m underground. The ATLAS cavern, 55 m long, 40 m high and 35 m wide (also located 100 m below the surface), is the largest man-made underground structure in the world, and the ATLAS detector, 45 m long and 12 m in diameter, just barely squeezes inside.
Engineering challenges aside, what we really care about is the physics.
The LHC will reach a center-of-mass energy of 14 TeV, with the aforementioned luminosity of 10³⁴ cm⁻² s⁻¹. Most optimistically, this gives them a “reach” for the discovery of new particles of up to about 5 TeV. Realistically, what one can see depends very much on the decay channel. Leptons and γs are very well discriminated. Fully hadronic final states can only be extracted from the (huge) QCD background with a hard cut, which limits them to only very heavy objects.
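To put the luminosity figure in more familiar units, here is a small Python sketch of what a year at design luminosity corresponds to in integrated luminosity, and what that means for a hypothetical new-physics process. The 10⁷ seconds of effective beam time per year and the 1 pb sample cross-section are illustrative assumptions on my part, not official projections.

```python
# What design luminosity buys you: integrated luminosity per year and
# the raw event yield for a hypothetical new-physics cross-section.
# The 1e7 s of effective beam time and the 1 pb signal are assumptions.

CM2_PER_FB = 1e-39           # 1 fb = 1e-39 cm^2, so 1 fb^-1 = 1e39 cm^-2
CM2_PER_PB = 1e-36           # 1 pb = 1e-36 cm^2

luminosity   = 1e34          # cm^-2 s^-1 (design)
beam_seconds = 1e7           # effective seconds of physics running per year (assumed)

integrated = luminosity * beam_seconds            # cm^-2
print(f"integrated luminosity ~ {integrated * CM2_PER_FB:.0f} fb^-1 per year")

sigma_new = 1.0 * CM2_PER_PB                      # a hypothetical 1 pb signal
print(f"events produced       ~ {sigma_new * integrated:.0e} per year")
```

With those assumed numbers, a year of running corresponds to about 100 fb⁻¹, so even a 1 pb process would be produced some 10⁵ times; the hard part, as the next paragraph makes clear, is digging those events out of the background.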
To see a very light Higgs (say, 115 GeV) at the 5σ level will require a year of running, as the H → γγ channel requires excellent EM calorimetry. For Higgs masses above about 130 GeV, the H → ZZ* → 4ℓ channel opens up, which will be much easier to separate from background. A squark or gluino with a mass less than 1.3 TeV will be seen in less than a month of running. Masses up to 2.5 TeV could be seen in about a year.
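The reason the running time matters so much is statistics: for a simple counting experiment, the naive significance S/√B grows like the square root of the integrated luminosity. Here is a toy illustration in Python; the signal and background rates are made-up numbers of roughly the right size for a small signal sitting on a large smooth background, not ATLAS or CMS projections.

```python
import math

# How discovery significance grows with integrated luminosity.
# S/sqrt(B) scales like sqrt(L): doubling the data gains a factor sqrt(2).
# The per-fb^-1 rates below are purely illustrative.

def significance(sig_per_fb, bkg_per_fb, lumi_fb):
    """Naive S/sqrt(B) significance at a given integrated luminosity (fb^-1)."""
    s = sig_per_fb * lumi_fb
    b = bkg_per_fb * lumi_fb
    return s / math.sqrt(b)

# Hypothetical channel: small signal on a large smooth background
# (qualitatively like a light Higgs decaying to two photons).
sig_per_fb, bkg_per_fb = 16.0, 1000.0

for lumi in (10, 30, 100):    # fb^-1
    print(f"{lumi:4d} fb^-1  ->  {significance(sig_per_fb, bkg_per_fb, lumi):.1f} sigma")
```

With these toy numbers, a handful of inverse femtobarns gives only a couple of σ, while a full year’s worth of data crosses the 5σ discovery threshold.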
With a year of running, all sorts of other beyond-the-Standard-Model physics (from new gauge bosons to TeV-scale black holes) will be testable too. I’m really very excited about the prospects for our field finally seeing some experimental input again.
¹ The total storage requirement expected for the LHC is about 10 petabytes/year. Currently, the largest database in the world is the BaBar database at SLAC, which holds just shy of a petabyte.
Re: LHC
From your report, it seems that higher masses will be more easily seen than lower ones. I worry about whether they are going, after all, to use the gained luminosity to recheck old events. Recently I learned that a 40 GeV event profusely reported in 1984 by UA1 was rejected after additional statistics washed it out and, it seems, no recheck was scheduled. Surely the same will happen with the L3 68 GeV events I mentioned some weeks ago, and with ALEPH’s 115 GeV, which followed the same pattern: preliminary result up, final result down, no new experimental input.
By the way, I’d like to collect all such cases, if only out of curiosity. Does anyone know of other singular events?