
GROMACS is one of the most widely used free and open-source codes in chemistry, used primarily for dynamical simulations of biomolecules. It is a molecular dynamics toolbox that provides a rich set of calculation types together with preparation and analysis tools. It runs well on almost all recent and past hardware platforms thanks to well-implemented parallel algorithms and modern software design.

NCSA's role in the DEEP-EST project

NCSA's responsibilities in the project encompass the installation and performance optimization of the MD simulator and the accompanying tools on the Extreme Scale Booster (ESB), the Data Analytics Module (DAM), and the Cluster Module (CM).

Depending on the problem size of the simulation, GROMACS runs on different parts of the DEEP-EST system:

Figure: Workflow (Tk 1.3)

Small runs execute on the CM, while medium-sized simulations run on the ESB/DAM. When running on the ESB/DAM, all computationally intensive parts are executed on the GPUs, while the CPUs manage data transfers and CUDA kernel execution.
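As a rough illustration of this division of labor, here is a minimal, hypothetical CUDA sketch (not GROMACS source code; the kernel and its scaling operation are invented stand-ins for the real non-bonded force work):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical stand-in for a compute-intensive force kernel:
// the heavy arithmetic runs on the GPU.
__global__ void scale_forces(float *f, float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        f[i] *= s;
}

int main()
{
    const int n = 1 << 20;
    float *h_f = new float[n];
    for (int i = 0; i < n; ++i)
        h_f[i] = 1.0f;

    float *d_f;
    cudaMalloc(&d_f, n * sizeof(float));

    // CPU role 1: manage the data transactions between host and device.
    cudaMemcpy(d_f, h_f, n * sizeof(float), cudaMemcpyHostToDevice);

    // CPU role 2: manage the CUDA kernel execution (launch and synchronize).
    scale_forces<<<(n + 255) / 256, 256>>>(d_f, 0.5f, n);
    cudaDeviceSynchronize();

    cudaMemcpy(h_f, d_f, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("f[0] = %.2f\n", h_f[0]);

    cudaFree(d_f);
    delete[] h_f;
    return 0;
}
```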

 

For larger simulations, one needs not only the GPUs of the ESB but also stronger CPUs such as those of the CM. Large simulations therefore run in an offload mode on CM+ESB:

Figure: WP1 Workflow

The ability to combine the advantages of two different modules improves the performance of large GROMACS runs on the DEEP-EST system:

Here, a use case with 80 million atoms is scaled on the DEEP-EST ESB. Each ESB node has a partner CM node for the PME calculations.
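At the MPI level, such a pairing can be pictured as splitting the job into particle-particle (PP) ranks on the ESB and PME ranks on the CM. The sketch below is illustrative only: it is not GROMACS code, the even/odd mapping is an invented example, and in real runs GROMACS assigns separate PME ranks itself (e.g. via mdrun's -npme option).

```cuda
#include <mpi.h>
#include <cstdio>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Hypothetical 1:1 mapping, mirroring the partner-node setup above:
    // even ranks -> PP work on an ESB GPU, odd ranks -> PME work on a CM CPU.
    int is_pme = rank % 2;

    // Each group gets its own communicator for its internal communication.
    MPI_Comm work_comm;
    MPI_Comm_split(MPI_COMM_WORLD, is_pme, rank, &work_comm);

    if (is_pme)
        std::printf("rank %d of %d: PME rank on a CM node\n", rank, size);
    else
        std::printf("rank %d of %d: PP rank driving an ESB GPU\n", rank, size);

    MPI_Comm_free(&work_comm);
    MPI_Finalize();
    return 0;
}
```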

 

MD simulations of large volumes (on the order of several million nm³) quickly become inefficient or even impossible. The Fast Multipole Method (FMM) [1] has lately gained significant attention in the MD community, mainly because of its O(N) complexity, compared with the O(N log N) complexity of PME-style methods. The GPUs used in the ESB promise to reduce the calculation times by an order of magnitude and thus make the FMM competitive [2][3][4] with PME in MD simulations of large volumes. In DEEP-EST, NCSA developed a multi-GPU FMM implementation, including the IRIS electrostatics library [5], that runs on the ESB. The following application mapping is now possible:
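For intuition about why the ESB GPUs pay off here: in an FMM, the far field is approximated by multipole expansions (which is what yields the O(N) complexity), while the near field is a direct particle-to-particle (P2P) sum over the few particles in neighboring leaf boxes. That P2P part is the arithmetic-heavy, GPU-friendly piece. A simplified, hypothetical CUDA P2P kernel (not the IRIS implementation) could look like this:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical near-field (P2P) kernel: direct Coulomb sum over the
// particles of one FMM leaf box. The far field is handled separately
// by multipole expansions.
__global__ void p2p_potential(const float4 *p, float *phi, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;
    float acc = 0.0f;
    for (int j = 0; j < n; ++j) {   // n is small: particles per leaf box
        if (j == i)
            continue;
        float dx = p[j].x - p[i].x;
        float dy = p[j].y - p[i].y;
        float dz = p[j].z - p[i].z;
        float r  = sqrtf(dx * dx + dy * dy + dz * dz);
        acc += p[j].w / r;          // the w component holds the charge
    }
    phi[i] = acc;
}

int main()
{
    const int n = 64;               // particles in one leaf box
    float4 h_p[n];
    for (int i = 0; i < n; ++i)
        h_p[i] = make_float4(0.1f * i, 0.0f, 0.0f, 1.0f);

    float4 *d_p;
    float  *d_phi, h_phi[n];
    cudaMalloc(&d_p, n * sizeof(float4));
    cudaMalloc(&d_phi, n * sizeof(float));
    cudaMemcpy(d_p, h_p, n * sizeof(float4), cudaMemcpyHostToDevice);

    p2p_potential<<<1, n>>>(d_p, d_phi, n);
    cudaMemcpy(h_phi, d_phi, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::printf("phi[0] = %f\n", h_phi[0]);

    cudaFree(d_p);
    cudaFree(d_phi);
    return 0;
}
```

In a full FMM, many such boxes are processed in parallel (for example, one thread block per box), which is where the throughput of the GPUs is exploited.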

The GROMACS + IRIS/FMM weak scalability, comparing 8 ESB nodes to 1, 16 to 2, and 32 to 4, is shown in the figure below:
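For reference, weak scalability means the problem size grows in proportion to the node count; in the standard definition (our notation, not taken from the figure), the efficiency for each of these 8x comparisons is

```latex
% Weak-scaling efficiency for base size N_0 on n_0 ESB nodes,
% scaled by a factor k (here k = 8, with n_0 = 1, 2, 4):
E_{\mathrm{weak}} = \frac{T(n_0,\, N_0)}{T(k\, n_0,\, k\, N_0)}
```

where T(n, N) is the wall-clock time on n ESB nodes for N atoms; ideal weak scaling gives E_weak = 1.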

 

 

[1] Rokhlin, Vladimir (1985). "Rapid Solution of Integral Equations of Classical Potential Theory." Journal of Computational Physics, Vol. 60, pp. 187–207.

[2] Rio Yokota, Tsuyoshi Hamada, Jaydeep P. Bardhan, Matthew G. Knepley, Lorena A. Barba: Biomolecular Electrostatics Simulation by an FMM-based BEM on 512 GPUs. CoRR abs/1007.4591 (2010)

[3] http://www.sppexa.de/

[4] Kohnke, B., Kutzner, C., & Grubmüller, H. (2020). A GPU-accelerated fast multipole method for GROMACS: Performance and accuracy. Journal of Chemical Theory and Computation, 16(11), 6938-6949. doi:10.1021/acs.jctc.0c00744.

[5] https://github.com/vpavlov/iris