CERN provides an extreme big-data application drawn from the High Energy Physics domain, to be deployed on the DEEP-EST prototype. It will demonstrate the feasibility of an integrated data refresh and reduction center for deploying dynamic improvements in code and calibration for eventual analysis use at the CMS experiment.

But what does CERN do exactly? 

At CERN, physicists and engineers are probing the fundamental structure of the universe. They use the world's largest and most complex scientific instruments to study the basic constituents of matter: the fundamental particles. The particles are made to collide at close to the speed of light (about 99.9 % of it). This process gives physicists clues about how the particles interact, and provides insights into the fundamental laws of nature.
The instruments used at CERN are purpose-built particle accelerators and detectors. Accelerators boost beams of particles to high energies before the beams are made to collide with each other or with stationary targets. Detectors observe and record the results of these collisions.
CERN is home to the Large Hadron Collider (LHC), the world's largest and most powerful particle accelerator. It consists of a 27-kilometre ring of superconducting magnets, with a number of accelerating structures to boost the energy of the particles along the way. In the course of the High Luminosity LHC project, the LHC is being upgraded to achieve instantaneous luminosities a factor of five larger than the nominal value, enabling the experiments to enlarge their data samples by one order of magnitude compared with the LHC baseline programme. Efficient usage of High Performance Computing (HPC) facilities therefore has the potential to be a key ingredient in tackling the computing challenges of the future LHC programme, and substantial R&D is being performed to harness the power provided by such machines.
CERN and the DEEP-EST Project
Within the context of the DEEP-EST Project, CERN, together with the CMS experiment, contributes one of several scientific applications brought on board to drive the co-design and development of a prototype for a first-of-its-kind Modular Supercomputer Architecture (MSA). The ability to contribute to the design of future generations of HPC machines is crucial to the successful utilization of such facilities by all LHC experiments. To this end, two CMS applications are being prepared for deployment on the DEEP-EST prototype.
The first application enables CMS event reconstruction to utilize heterogeneous computing resources (GPUs and FPGAs), porting the local reconstruction of the CMS calorimeters (ECAL and HCAL) to make the most efficient use of GPU accelerators. Similar activities are ongoing within the CMS experiment, optimizing the software stack to make efficient use of the heterogeneous computing resources to be provided by future HPC facilities.
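To give a flavour of why calorimeter local reconstruction maps well onto accelerators, here is a minimal, purely illustrative sketch in Python/NumPy: every detector channel is processed independently with the same arithmetic, which is exactly the pattern GPUs exploit. The names, numbers, and the reconstruction formula below are invented for illustration and are not the actual CMS algorithms.

```python
import numpy as np

# Hypothetical per-channel "local reconstruction": convert raw digitizer
# samples into calibrated energies. Every channel is independent, so the
# whole detector can be processed in parallel on a GPU.
def reconstruct_energies(adc_samples, pedestals, gains):
    """adc_samples: (n_channels, n_samples) raw ADC counts.
    pedestals, gains: (n_channels,) per-channel calibration constants.
    Returns a per-channel energy estimate (arbitrary units)."""
    # Subtract each channel's pedestal from its time samples, clip negatives...
    signal = np.clip(adc_samples - pedestals[:, None], 0.0, None)
    # ...take the peak sample as the amplitude estimate...
    amplitude = signal.max(axis=1)
    # ...and apply the per-channel gain to obtain an energy.
    return amplitude * gains

adc = np.array([[10.0, 50.0, 30.0],   # channel with a pulse
                [ 5.0,  5.0,  5.0]])  # channel at pedestal level
ped = np.array([5.0, 5.0])
gain = np.array([0.1, 0.2])
energies = reconstruct_energies(adc, ped, gain)
print(energies)  # channel energies: 4.5 and 0.0
```

The same per-channel independence is what a CUDA port exploits: one GPU thread (or a small group of threads) per calorimeter channel.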
The second application features a full CMS analysis pipeline, where the results of reconstruction are used to select different types of collisions and/or to derive physics quantities of interest using various Machine Learning and Deep Learning algorithms. These two CMS applications provide completely different workloads, but exercise two of the most important parts of the whole CMS data processing stack. Moreover, they cover parts that traditionally did not utilize HPC resources efficiently; the DEEP-EST project thus gives CERN a unique platform and opportunity to contribute to the co-design of the prototype.
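As a toy illustration of the analysis-pipeline idea (not the actual CMS software), the sketch below scores "events" described by a couple of reconstructed features with a fixed logistic model and keeps those above threshold. In practice the weights would come from training ML/DL models on simulated collisions; the features, weights, and threshold here are invented.

```python
import numpy as np

# Toy stand-in for an analysis step: each "event" is summarised by two
# invented reconstructed features, and a learned model scores events so
# that interesting collisions can be selected.
weights = np.array([1.5, -2.0])  # fixed for illustration only
bias = -0.5

def score_events(features):
    """features: (n_events, 2) array -> per-event probability-like score."""
    logits = features @ weights + bias
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid

# "Signal-like" events cluster at high feature 0 / low feature 1.
events = np.array([[2.0, 0.1],   # signal-like
                   [0.1, 2.0]])  # background-like
scores = score_events(events)
selected = scores > 0.5          # the selection step
print(selected)                  # first event kept, second rejected
```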
Fast Simulation for LHC Monte Carlo detector simulation
Another interesting workflow to test in the future is Fast Simulation for LHC Monte Carlo detector simulation. Generative modelling approaches are used, in particular 3D conditional Generative Adversarial Networks (GANs); a significant speed-up of the training process can be achieved with an MPI-based distributed parallel approach, a workload whose requirements are highly complementary to those of the other HPC applications.

CERN's contribution to the project is carried out through CERN openlab, a public-private partnership between CERN and leading ICT companies and research institutions. CERN openlab's mission is to accelerate the development of cutting-edge ICT solutions for the worldwide LHC community and wider scientific research.
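The MPI-based data-parallel training idea can be sketched without any GAN machinery: each worker computes gradients on its own data shard, the gradients are averaged across workers (an allreduce), and every worker applies the same update. The sketch below simulates this in a single process with an invented toy loss; in a real distributed run the averaging step would be a collective operation such as mpi4py's `Allreduce`.

```python
import numpy as np

# Toy per-worker gradient: mean squared distance of params from the
# shard mean (a stand-in for a real loss such as a GAN objective).
def local_gradient(params, data_shard):
    return 2.0 * (params - data_shard.mean(axis=0))

# Simulated allreduce: average the gradients from all "workers".
def simulated_allreduce_mean(grads):
    return np.mean(grads, axis=0)

params = np.zeros(2)
shards = [np.array([[1.0, 1.0], [3.0, 3.0]]),   # worker 0's data
          np.array([[5.0, 5.0], [7.0, 7.0]])]   # worker 1's data

for _ in range(100):                       # synchronous update steps
    grads = [local_gradient(params, s) for s in shards]
    avg_grad = simulated_allreduce_mean(grads)
    params -= 0.1 * avg_grad               # identical update on every worker

print(params)  # converges toward the global data mean, (4, 4)
```

Because every worker sees the averaged gradient, all replicas stay in lockstep, which is why this scheme scales well on HPC interconnects.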