Q: In most Exascale panels or BoF sessions, hardware is in the limelight. Why do you want to talk about programming models, and why do you think now is the right time to do so?
Mark: We need to talk about programming models for the Exascale because they are the crucial interface between applications and hardware. As hardware designers continue to push the limits of silicon-based technology, it is likely that systems are going to get even harder to program than they are now. Robust and efficient implementations of programming models take time to develop, so we need to be working on them before the hardware appears, not after!
Q: Coming back to hardware, which is of course intrinsically linked to the programming models. Given the clear trend towards heterogeneous systems, the big question is: how can you program such beasts and exploit the hardware as effectively as possible?
Mark: A single programming model that addresses all levels of parallelism, from distributed memory down to vector lanes, is the Holy Grail, but we aren’t there yet, and given the time it takes for programming models to mature, we need something that will work in the short to medium term. So I think that means making multiple APIs work well together in the same application: we’ve already seen reasonable successes with MPI + OpenMP, for example, and I’d expect that trend to continue.
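To make the hybrid approach concrete, here is a minimal sketch of the MPI + OpenMP pattern Mark describes: MPI distributes work across nodes while OpenMP threads share the work within each node. The program itself (a toy reduction) and all sizes are illustrative assumptions, not taken from any particular application.

```c
/* Minimal hybrid MPI + OpenMP sketch: MPI handles distributed memory,
 * OpenMP parallelises the node-local loop. Compile with e.g.
 * mpicc -fopenmp hybrid.c -o hybrid (illustrative flags). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    /* Request threading support so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0;
    /* Node-level parallelism: threads share this rank's portion of the work. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1.0 / (i + 1 + rank);

    double global;
    /* Inter-node parallelism: combine per-rank results across the machine. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("global sum = %f\n", global);

    MPI_Finalize();
    return 0;
}
```

The MPI_THREAD_FUNNELED level requested at initialisation is one of the interoperability details the two standards have had to agree on: it promises the MPI library that only the main thread will make MPI calls.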
Q: What steps does the community need to take – e.g. prepare applications or enhance skills in certain areas – to realise the benefits of the research and development being undertaken towards Exascale programming models?
Mark: I think applications will need to make use of some of the newer features in existing programming models: for example, the task and target constructs in OpenMP (and not just parallel loops), and one-sided rather than two-sided message passing in MPI. Some of these require quite different ways of thinking about parallelism, and substantial refactoring of applications, so training developers is going to be very important, as is developing tools to help them understand the performance of programs with more irregular and asynchronous behaviour.
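As a hedged illustration of the kind of rethinking involved, the sketch below uses the OpenMP task construct on the textbook recursive Fibonacci example (a standard teaching case, not code from the interview): parallelism is expressed as a dynamically scheduled task tree rather than as parallel loops.

```c
/* Sketch of the OpenMP task construct: work is expressed as tasks that
 * the runtime schedules dynamically, rather than as parallel loops. */
#include <stdio.h>

long fib(int n)
{
    if (n < 2) return n;
    long x, y;
    /* Each recursive call becomes a task; shared() lets the child task
     * write its result back to the parent's stack variables. The if()
     * clause is an illustrative cutoff to avoid tiny tasks. */
    #pragma omp task shared(x) if(n > 20)
    x = fib(n - 1);
    #pragma omp task shared(y) if(n > 20)
    y = fib(n - 2);
    /* Wait for both child tasks before combining their results. */
    #pragma omp taskwait
    return x + y;
}

int main(void)
{
    long result;
    #pragma omp parallel
    #pragma omp single    /* one thread seeds the task tree */
    result = fib(30);
    printf("fib(30) = %ld\n", result);
    return 0;
}
```

One-sided message passing asks for a similar shift in thinking: instead of matched MPI_Send/MPI_Recv pairs, one process exposes a memory window (MPI_Win_create) that others read or write directly with MPI_Get and MPI_Put, decoupling data movement from synchronisation.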
Q: All three of you have been deeply involved in research on future programming models in recent years. Is there anything you would like to highlight from your work?
Mark: Not from my personal research, but I’d like to mention the EU project INTERTWinE (http://www.intertwine-project.eu/), which I’m co-ordinating, and which is specifically looking at solving some of the interoperability issues between different HPC programming APIs.
Q: What do you expect to be the programming model of an Exascale system? Do you think it will be an evolution of a current one or something completely new?
Mark: I can see both approaches having a role – there may be some systems that are designed with a particular application or class of applications in mind, where it will be worth the investment to use a bespoke programming API to exploit them. But for more general-purpose systems, application developers will still need the reliability and portability of existing standards, though they may exploit new features and new combinations of programming models to achieve the high scalability needed.
About the interviewee
Dr Mark Bull is a Research and Training Architect at EPCC, University of Edinburgh. Previously he worked at the University of Manchester, where he also obtained his PhD in parallel numerical algorithms. He has long-standing research interests in parallel algorithms, parallel programming models, performance analysis and benchmarking. He is EPCC's representative on the OpenMP ARB, and was the chair of the OpenMP ARB Language Committee from 2003 to 2008, overseeing the production of the 2.5 and 3.0 versions of the OpenMP specifications. He is also the author of a number of synthetic benchmark suites for OpenMP, hybrid MPI/OpenMP and Java. He is the Principal Investigator for the Horizon 2020 FET-HPC project INTERTWinE, whose main focus is on interoperability between programming models for large-scale HPC systems.