
Q: In most Exascale panels or BoF sessions, hardware is in the limelight. Why do you want to talk about software, or more specifically programming models, and why do you think now is the perfect time?

Valeria: Imagine you have the coolest kitchen in the world, with state-of-the-art electrical appliances, but you do not have a cookery book. What good does this kitchen do? We are here to provide the cookery book, in other words the programming models that enable you to handle the fancy new hardware. Programming models provide an abstraction of application software from hardware. This abstraction is essential as hardware grows more and more complex on the road to Exascale.

This year’s ISC is the perfect time to highlight research on programming models, because two European projects on programming models are handing over the torch. The EPiGRAM project, which finishes at the end of the year, is improving both the Message Passing Interface (MPI) and the Partitioned Global Address Space (PGAS) approach GASPI/GPI. The recently started INTERTWINE project picks up the developments of EPiGRAM and looks into the interoperability of programming models and runtime systems.

Q: Coming back to hardware – which is obviously intrinsically linked to the programming models. Considering the clear trend towards heterogeneous systems, the big question is: how can you program such beasts and exploit the hardware as optimally as possible?

Valeria: It is important to distinguish between two cases:

  • Traditional, typical HPC applications, which have a long tradition of using supercomputers, and
  • New, upcoming applications, which are paving their way into HPC.

In the case of typical, traditional HPC application software, the codes live much longer than hardware cycles last. Programming models as stable interfaces are indispensable, as they ‘buffer’ changes in the hardware as much as possible. This is especially the case with disruptive hardware innovations like Remote Direct Memory Access (RDMA) and many-core systems (be it Intel MIC or GPUs), which even force developers to rethink the communication patterns of longstanding HPC codes.

In the case of upcoming HPC applications – for instance Big Data applications or machine learning going HPC – you can use the full force of HPC tools to help parallelise the software. The big advantage here: you do not have to worry about already existing communication patterns. There are tools that relieve app developers of the difficult parallelisation effort. These help to ensure that, on the one hand, the intention of the domain expert is safeguarded and that, on the other hand, parallelisation is taken care of on all software layers, resulting in very efficient codes. Take for instance the GPI-SPACE software (http://www.gpi-space.de/) developed at Fraunhofer ITWM. Even though it carries the programming model GPI in its name, it is independent of GASPI/GPI by now and a perfect match for upcoming HPC apps.

Q: What steps does the community need to take – e.g. prepare applications or enhance skills in certain areas – to realise the benefits of the research and development being undertaken towards Exascale programming models?

Valeria: At Exascale, communication will be THE bottleneck. Due to the disruptive changes in hardware, even the traditional HPC applications need to rethink their communication patterns. Hardware-driven RDMA already allows for one-sided and asynchronous programming. From a software point of view, we have to prepare applications so that computation and communication phases can overlap. Another aspect is the memory hierarchies that the new many-core systems provide. Here you automatically arrive at a hierarchy of communication models as well – something we need to address now.
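The overlap of computation and communication described above can be pictured with a small, self-contained sketch. To be clear, this is not GASPI/GPI code; it is a plain Python illustration in which a background thread stands in for an asynchronous one-sided transfer. The pattern is the point: post the communication first, compute on data you already have, and synchronise only at the moment the transferred data is actually needed.

```python
import threading
import time

def transfer(buf, done):
    # Stand-in for an asynchronous one-sided (RDMA-style) transfer
    # that completes in the background while the CPU keeps computing.
    time.sleep(0.1)          # simulated network latency
    buf.append("halo data")  # data "lands" in the local buffer
    done.set()               # completion notification

halo = []
done = threading.Event()

# 1. Post the communication early.
threading.Thread(target=transfer, args=(halo, done)).start()

# 2. Compute on the inner domain while the halo is in flight.
inner = sum(i * i for i in range(100_000))

# 3. Synchronise only when the communicated data is needed.
done.wait()
print(inner, halo)
```

If the inner computation takes at least as long as the transfer, the communication time is hidden completely, which is exactly the effect a one-sided, asynchronous model aims for.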

GASPI/GPI, the programming model developed at Fraunhofer ITWM, provides an interface built on asynchronous, one-sided communication. GASPI/GPI is thread-safe. It offers a good match between state-of-the-art hardware capabilities and application needs.

Q: You have been deeply involved in research on future programming models in recent years. Is there anything you want to highlight from your work?

Valeria: With the objective of preparing for Exascale on current Petascale systems, GASPI/GPI was run on one of Germany’s largest supercomputers, the SuperMUC at the Leibniz Supercomputing Centre (LRZ) in Garching, in 2015 and 2016. Reverse Time Migration (RTM), a seismic imaging technique, was scaled up over three orders of magnitude. Strong scaling with over 70% parallel efficiency was shown on up to 64,000 cores. Beyond 1K nodes, the domain decomposition produces very small subdomains consisting of boundary elements only; this tightens the coupling between subdomains and reduces the overlap of communication with computation. The high-scalability runs were a great success for GASPI/GPI: they show that GASPI/GPI scales up to Petascale and that the promising design of the programming model holds up in real applications.
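Strong scaling parallel efficiency relates the measured speedup to the ideal speedup at a fixed total problem size. The sketch below shows how a figure like the reported 70% on 64,000 cores is computed; the baseline core count and the run times are made-up illustrative values, not the actual RTM measurements.

```python
# Strong scaling: fixed total problem size, growing core count.
# efficiency = measured speedup / ideal speedup
#            = (t_base / t_n) / (n / n_base)
def parallel_efficiency(t_base, n_base, t_n, n):
    speedup = t_base / t_n
    return speedup / (n / n_base)

# Hypothetical timings: 1000 s on a 64-core baseline, 1.4 s on 64,000 cores.
eff = parallel_efficiency(1000.0, 64, 1.4, 64_000)
print(f"{eff:.0%}")  # ~71%, the regime the SuperMUC runs demonstrated
```

The shrinking-subdomain effect mentioned above shows up here as a growing `t_n` relative to the ideal `t_base * n_base / n`, which pushes the efficiency below 100%.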

Q: What do you expect to be the programming model of an Exascale system? Do you think it will be the evolution of a current one or something completely new?

Valeria: GASPI/GPI, the programming model developed at Fraunhofer ITWM, will play an important role: it possesses all the important features for Exascale readiness – one-sidedness, asynchronous behaviour and zero-copy transfers. It puts applications in a position to hide communication times completely.

However, it will be difficult to fully port the traditional HPC applications. So interoperability between programming models will become more and more important. This is an active field of research (e.g. in the INTERTWINE project).

 

About the interviewee

 

Dr. Valeria Bartsch coordinates the EC projects of the High Performance Computing department at the Fraunhofer Institute for Industrial Mathematics (ITWM). She received her Diploma in Physics from the University of Dortmund (2000) and her PhD from the University of Karlsruhe (2003). Several postdoctoral positions in experimental particle physics followed. A growing interest in computing led to her current position.