Stanford part of $25 million NSF project to prepare for data deluge from Large Hadron Collider
Stanford’s Lauren Tompkins will collaborate on a $25 million project funded by the National Science Foundation to establish the Institute for Research and Innovation in Software for High Energy Physics (IRIS-HEP).
IRIS-HEP will focus on developing software tools and algorithms to explore and analyze the enormous amount of data that will be generated by the High-Luminosity Large Hadron Collider (HL-LHC), an upgrade of the 17-mile-circumference synchrotron ring that straddles the French-Swiss border and is operated by CERN, the European laboratory for particle physics.
“In order to realize the full potential of the High Luminosity LHC, we need to become much smarter about how we record and analyze the data. Measuring the Higgs boson, discovering dark matter or finding new particles which may be hiding in our data requires next-generation software and innovative algorithms. Through IRIS-HEP, we’ll be able to collaborate across experiments and with scientists from other disciplines to solve our needle-in-a-field-of-haystacks problem,” said Tompkins, assistant professor of physics and the principal investigator for the Stanford portion of the project.
When the HL-LHC reaches full capability in 2026, it will produce more than 1 billion particle collisions every second – a tenfold increase from current levels. Weeding out irrelevant events and recording only the most promising ones will be crucial if physicists are to focus on the few of genuine interest. For example, the Higgs boson – the long-sought, final piece of the Standard Model of physics that was discovered at the LHC in 2012 – is produced only once out of every 10 billion collisions.
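The scale of that selection problem can be made concrete with a back-of-envelope calculation using the two figures quoted above (these are rough illustrative numbers from the article, not precise accelerator parameters):

```python
# Rough rate estimate: how often does the HL-LHC produce a Higgs boson,
# given the approximate figures cited in the article?

collisions_per_second = 1e9      # HL-LHC: more than 1 billion collisions per second
higgs_per_collision = 1 / 1e10   # a Higgs appears roughly once per 10 billion collisions

higgs_per_second = collisions_per_second * higgs_per_collision
seconds_per_higgs = 1 / higgs_per_second

print(higgs_per_second)   # 0.1 -> about one Higgs boson every ten seconds
print(seconds_per_higgs)  # 10.0
```

Even at a billion collisions per second, a Higgs boson emerges only about once every ten seconds, which is why fast, accurate event selection is central to the HL-LHC program.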
Tompkins, who is a member of the ATLAS collaboration at CERN, is also designing and building a hardware-based pattern recognition system that will complement the software tools developed by IRIS-HEP. Called Fast Tracker, the instrument will be able to conduct a rapid first-pass analysis of LHC data to identify events of interest.
“There might be other particles or rare processes that we’ve predicted but haven’t observed yet. To find and study them, we need to be able to efficiently select the data to analyze,” Tompkins said.
Read this article and more on the School of Humanities and Sciences website.