Message Passing Interface (MPI)

The Message Passing Interface (MPI) is a library specification that allows parallel programs to pass messages between processes running across HPC's nodes. HPC uses OpenMPI, an open-source, portable implementation of the MPI standard. OpenMPI contains a complete implementation of version 1.2 of the MPI standard as well as MPI-2.

OpenMPI is both a runtime and compile-time environment for MPI-compliant code.

Compilers

OpenMPI supports several different compiler sets, but you must specify which one you wish to use for each job.

OpenMPI supports C, C++, F77, and F90; however, not all compiler sets support all four languages:

Portland Group CDK: C, C++, F77, and F90
Intel: C, C++, F77, and F90
KCC: C++ only
Absoft: F77 and F90
GNU: C, C++, and F77
OpenMPI is tightly integrated with the operating system and the compiler set being used.

OpenMPI Tutorials

Setting Up Your Job

With so many device and compiler combinations, setting up your environment can seem complicated. HPC has defined preferred bindings to simplify your environment initialization. The preferred bindings work well on the Linux cluster; however, you are free to choose the best setup for your needs.
For most ITS software packages, it is recommended that you use the default link, which points to the default version of the software. For OpenMPI, this is not recommended: changing the default link after a newer version of OpenMPI is installed will break jobs that are currently running or waiting in the queue, so you should always use the path associated with the specific version your job was built against.

You should look at the links in /usr/usc/openmpi to find the appropriate version. Next, choose a specific binding from those available in that version.

Some examples of available setup files are:
/usr/usc/openmpi/1.8.8/setup.sh.intel
/usr/usc/openmpi/1.8.8/setup.sh.pgi
/usr/usc/openmpi/1.8.8/setup.sh.gnu
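
For example, if you are using a bash-compatible shell and have chosen the GNU binding for version 1.8.8, you would source the corresponding setup file before compiling or running anything:

    source /usr/usc/openmpi/1.8.8/setup.sh.gnu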

Compiling Your Job

OpenMPI includes wrapper scripts designed to hide the details of compiling with any particular build or compiler. Simply source an OpenMPI build into your shell environment and use mpicc, mpicxx, or mpifort to compile your code. The arguments to the script are passed to the compiler and interpreted normally. The wrapper script will link in the correct libraries and add rpaths as necessary.
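
As a minimal sketch, the following hello-world program (the file name hello.c is our own choice for illustration) exercises the basic MPI calls:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut down the MPI runtime */
        return 0;
    }

After sourcing an OpenMPI setup file, it can be compiled with the C wrapper:

    mpicc -o hello hello.c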

Running Your Job

OpenMPI includes a wrapper script called mpirun(1) to hide the details of launching your program. To use mpirun, it is critical that the exact same OpenMPI build that was used to compile the program is also used to run it (see the mpiexec note below to get around this restriction). After sourcing the correct OpenMPI build, mpirun requires a few arguments to tell it how many processes to start and which nodes to run them on. The two most common options are -np, which sets the number of processes, and -machinefile, which lists the nodes to use. Under Slurm, simply pass $SLURM_NTASKS to -np.
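
For example, a minimal Slurm batch script that runs the hello program from the compiling example above might look like the following sketch (the task count, setup file, and program name are illustrative assumptions, not HPC defaults):

    #!/bin/bash
    #SBATCH --ntasks=8    # number of MPI tasks to request (illustrative)

    # Source the same OpenMPI build that was used to compile the program
    source /usr/usc/openmpi/1.8.8/setup.sh.gnu

    # Slurm sets SLURM_NTASKS to the --ntasks value requested above
    mpirun -np $SLURM_NTASKS ./hello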

Getting Help

Additional information about MPI can be found on the Official Message Passing Interface (MPI) website. To get help with using MPI on HPC, send an email to hpc@usc.edu.