Parallelization of Scientific Applications with MPI


Abstract


Modernizing scientific codes to exploit parallel computing is essential for significantly improving performance and efficiency. However, transitioning from sequential to parallel programming introduces complex challenges, such as managing global variables, resolving aliasing issues, and handling random number generators and other stateful functions. To address these challenges, this paper proposes a semi-automatic methodology that simplifies the parallelization of applications with minimal redesign effort. The approach supports multiple parallel computing paradigms, including shared memory (via OpenMP), message passing (via MPI), and GPU computing (via OpenACC). Its efficacy is validated by applying it to four real-world physics and materials science codes, demonstrating its broad applicability and practical value for scientific computing.




Keywords


Open Multi-Processing (OpenMP); Message Passing Interface (MPI); Open Accelerators (OpenACC); Scalable Local Fourier Analysis (SLFA); Block Diagonal Varying System (BDVS).