DEEP: A modular supercomputer which adapts to application needs

There’s more than one way to build a supercomputer, and meeting the diverse demands of modern applications, which increasingly combine data analytics and artificial intelligence (AI) with simulation, requires a flexible system architecture. Since 2011, the DEEP series of projects has pioneered an innovative concept known as the modular supercomputer architecture, whereby ‘multiple modules are coupled like building blocks’, explains the project’s coordinator Estela Suárez (Jülich Supercomputing Centre). ‘Each module is tailored to the needs of a specific class of applications, and all modules together behave as a single machine,’ she says.

This article was published in the current issue of HiPEAC Magazine. Read the HiPEAC high-performance computing special feature here: www.hipeac.net

Connected by a high-speed federated network and programmed through a uniform system software and programming environment, the supercomputer ‘allows an application to be distributed over several hardware modules, running each code component on the one which best suits its particular needs’, according to Estela. Specifically, DEEP-EST, the latest project in the series, is building a prototype with three modules: ‘a general-purpose cluster for low or medium scalable codes, a highly scalable booster comprising a cluster of accelerators, and a Data Analytics Module (DAM)’. The prototype will be tested with six applications that combine high-performance computing (HPC) with high-performance data analytics (HPDA) and machine learning (ML).
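
To make the idea of spreading one application over several modules a little more concrete, the sketch below shows how an MPI job could divide its ranks into a simulation group and a data-analytics group with MPI_Comm_split; on a modular system the batch system would then place each group on the module that suits it, for instance the booster and the DAM. This is a generic illustration rather than project code, and the 75/25 split and the component names are assumptions made for the example.

```c
/* Hypothetical sketch: one MPI application whose ranks are divided into a
 * "simulation" group and an "analytics" group, as they might be when the
 * two components run on different modules of a modular system.          */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Illustrative split: three quarters of the ranks simulate, the rest
     * analyse. In practice the mapping would come from the batch system. */
    int color = (world_rank < (3 * world_size) / 4) ? 0 : 1;
    MPI_Comm module_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &module_comm);

    int local_rank;
    MPI_Comm_rank(module_comm, &local_rank);

    if (color == 0)
        printf("simulation component, local rank %d\n", local_rank);
    else
        printf("analytics component, local rank %d\n", local_rank);

    MPI_Comm_free(&module_comm);
    MPI_Finalize();
    return 0;
}
```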

The DEEP approach is part of the trend towards using accelerators to improve performance and overall energy efficiency – but with a twist. ‘Traditionally, heterogeneity is done within the node, combining a central processing unit (CPU) with one or more accelerators. In DEEP-EST we segregate the resources and pool them into compute modules, as this enables us to flexibly adapt the system to very diverse application requirements,’ Estela says. Beyond usability and flexibility, the approach aims to deliver sustained performance at exascale levels.

One aspect that makes the DEEP architecture stand out is its co-design approach, a key component of the project. ‘Only by properly understanding what users and their applications really do can you build a computer that efficiently solves their problems,’ says Estela. ‘In DEEP-EST, we’ve selected six ambitious HPC/HPDA applications to drive the co-design process; they will be used to define and evaluate the hardware and software technologies developed.’

Careful analysis of the application codes gave the team a fuller understanding of their requirements, which informed the prototype’s design and configuration, she notes. ‘For instance, we used application benchmarks to choose the CPU version on each prototype module, while memory technologies and capacities have been selected based on the input/output needs of our co-design codes. Each application uses different module combinations, while dynamic scheduling and resource management software ensure the highest possible throughput.’
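
As a toy illustration of how a code might adapt to whichever module combination it is given, the sketch below assumes a hypothetical MODULE_TYPE environment variable set per node by the resource manager; this variable name and the module labels are invented for the example and are not part of the actual DEEP-EST software stack.

```c
/* Toy illustration only: MODULE_TYPE is a hypothetical variable assumed to
 * be set per node by the resource manager ("cluster", "booster" or "dam");
 * it is not part of the real DEEP-EST scheduling software.               */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Pick a code path depending on the module this process was placed on. */
static void run_for_module(const char *module_type)
{
    if (module_type && strcmp(module_type, "booster") == 0)
        puts("booster node: run the highly scalable, accelerator-friendly kernels");
    else if (module_type && strcmp(module_type, "dam") == 0)
        puts("DAM node: run the memory-hungry analytics / ML stage");
    else
        puts("cluster node: run the general-purpose, low/medium scalable parts");
}

int main(void)
{
    run_for_module(getenv("MODULE_TYPE"));
    return 0;
}
```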

To complement the modules serving traditional compute-intensive HPC applications, the DEEP-EST DAM includes leading-edge memory and storage technology tailored to the needs of the data-intensive workloads that occur in data analytics and ML. ‘Based on general-purpose processors with a huge amount of memory per core, this module is boosted by powerful general-purpose graphics processing unit (GPGPU) and field-programmable gate array (FPGA) accelerators,’ says Estela. In the space weather example, the DAM is ideal for analysing high-resolution satellite images, Estela explains, while other parts of the application workflow, such as simulating the interaction of particles emitted by the Sun with the Earth’s magnetic field, are distributed between the cluster module and the booster.
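
A staged workflow of this kind could, in principle, be coupled inside a single MPI job, with the simulation ranks handing a reduced result to an analytics rank placed on another module. The sketch below illustrates that pattern under generic assumptions; the role assignment and the summed "summary" value are placeholders, and it is not an excerpt from the space-weather application.

```c
/* Generic sketch of a two-stage workflow inside one MPI job: "simulation"
 * ranks reduce a result and hand it to an "analytics" rank, mimicking
 * stages placed on different modules of the system.                     */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {            /* need at least one rank per stage */
        MPI_Finalize();
        return 0;
    }

    /* Illustrative role assignment: the last world rank plays the analytics stage. */
    int is_analytics = (rank == size - 1);

    MPI_Comm stage_comm;       /* communicator local to each stage */
    MPI_Comm_split(MPI_COMM_WORLD, is_analytics, rank, &stage_comm);

    if (!is_analytics) {
        /* Simulation stage: combine local results, then forward one summary
         * value from stage rank 0 to the analytics rank.                   */
        double local = (double)rank, summary = 0.0;
        MPI_Reduce(&local, &summary, 1, MPI_DOUBLE, MPI_SUM, 0, stage_comm);

        int stage_rank;
        MPI_Comm_rank(stage_comm, &stage_rank);
        if (stage_rank == 0)
            MPI_Send(&summary, 1, MPI_DOUBLE, size - 1, 0, MPI_COMM_WORLD);
    } else {
        /* Analytics stage: receive the summary and (here) simply report it. */
        double summary;
        MPI_Recv(&summary, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("analytics stage received summary %.1f\n", summary);
    }

    MPI_Comm_free(&stage_comm);
    MPI_Finalize();
    return 0;
}
```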

Through the DEEP projects, researchers have shown that pooling resources into compute modules efficiently serves applications ranging from multi-physics simulations, to simulations integrating HPC with HPDA, to complex heterogeneous workflows such as those found in artificial intelligence. ‘One of our most significant achievements was delivering a software environment which allows the “module farm” to work as a single machine,’ says Estela. ‘Now we have tangible results in the form of JURECA, the first modular computing system at production level.’