Benchmarking is an essential element in evaluating the success of a hardware prototyping project. In the DEEP projects we use the JUBE benchmarking environment to assess the performance of the DEEP system.
Benchmarking a computer system usually involves numerous tasks and several runs of different applications. Configuring, compiling, and running a benchmark suite on several platforms, together with the accompanying verification and analysis of results, requires considerable administrative effort and produces large amounts of data that have to be collected in a central database and analysed. Without a benchmarking environment, all of these steps have to be performed by hand.
For each benchmark application, the benchmark data is written out in a defined format that enables the benchmarker to extract the desired information. This data can be parsed by automatic pre- and post-processing scripts that extract the relevant information and store it in a more compact form for manual interpretation.
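The following Python sketch illustrates such a post-processing step under simple assumptions: it scans a raw benchmark log for lines of the form "metric: value unit" and stores the extracted values in a compact CSV summary. The file names, the log format, and the regular expression are hypothetical placeholders, not part of any specific benchmark.

```python
# Minimal post-processing sketch: parse a benchmark log and store the
# extracted metrics compactly. File names and log format are hypothetical.
import csv
import re

# Example pattern for lines such as "write bandwidth: 1234.5 MB/s"
METRIC_PATTERN = re.compile(r"^(?P<name>[\w ]+):\s+(?P<value>[\d.]+)\s+(?P<unit>\S+)")

def parse_log(path):
    """Extract (name, value, unit) tuples from a raw benchmark log."""
    metrics = []
    with open(path) as log:
        for line in log:
            match = METRIC_PATTERN.match(line.strip())
            if match:
                metrics.append((match["name"], float(match["value"]), match["unit"]))
    return metrics

def store_summary(metrics, path):
    """Write the extracted metrics to a compact CSV summary."""
    with open(path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["metric", "value", "unit"])
        writer.writerows(metrics)

if __name__ == "__main__":
    store_summary(parse_log("benchmark_run.log"), "summary.csv")
```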
JUBE, which is actively developed by the Jülich Supercomputing Centre, provides a script-based framework to easily create benchmark sets, run those sets on different computer systems, and evaluate the results. Within the DEEP projects, the main focus lies on collecting benchmarking tests to compare
- The different I/O approaches used in the DEEP projects.
- The I/O performance of the DEEP systems compared to production systems.
For this purpose, all DEEP co-design applications are integrated into the JUBE benchmarking environment.
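The benchmark sets themselves are described in JUBE's own input files; the Python sketch below is therefore not JUBE code. It merely illustrates, with hypothetical application names, commands, and parameters, the kind of parameterised configure-run-evaluate loop that such a framework automates for each integrated application.

```python
# Conceptual sketch of the loop a benchmarking environment automates:
# run each application with several parameterisations and collect one
# result record per run. All names, commands and parameters are hypothetical.
import itertools
import subprocess

APPLICATIONS = {
    # application name -> command template (placeholders filled per run)
    "io_app_a": "mpiexec -n {tasks} ./app_a --mode {io_mode}",
    "io_app_b": "mpiexec -n {tasks} ./app_b --mode {io_mode}",
}
PARAMETERS = {"tasks": [16, 32], "io_mode": ["posix", "sionlib"]}

def run_all():
    """Run every application for every parameter combination."""
    results = []
    keys = list(PARAMETERS)
    for app, template in APPLICATIONS.items():
        for values in itertools.product(*(PARAMETERS[k] for k in keys)):
            params = dict(zip(keys, values))
            cmd = template.format(**params)
            proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
            results.append({"app": app, **params, "returncode": proc.returncode})
    return results

if __name__ == "__main__":
    for row in run_all():
        print(row)
```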
The version used in the DEEP projects is written in Python and is currently installed on the DEEP Cluster as well as on the DEEP-ER SDV.
For more information, including the full documentation and download of the latest version, please visit: http://www.fz-juelich.de/jsc/jube