Examples of MPI Programs

This page contains two examples of Message Passing Interface (MPI) programs and a sample job run. For more information on MPI, please see the Message Passing Interface (MPI) page. For more information on sourcing the correct MPI compiler for your job, please see the Setting Up a MPI Compiler page.

Example of an MPI Program mpitest.c

/* program hello */
/* Adapted from mpihello.f by drs */

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
  int rank;
  char hostname[256];

  MPI_Init(&argc,&argv);                 /* initialize the MPI environment */
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* get this process's rank (0..N-1) */
  gethostname(hostname,255);             /* get the name of the node this rank runs on */

  printf("Hello world!  I am process number: %d on host %s\n", rank, hostname);

  MPI_Finalize();

  return 0;
}

Example of MPI Program Execution Using srun

The preferred means of starting MPI programs on the Linux cluster is srun, Slurm's built-in launcher; mpirun will also work. Using srun keeps the job script simple, since Slurm already knows how many tasks to start and where to place them. You must still source the correct setup file before compiling your program, and the job script should source the same file so that the MPI libraries are available at run time. The example MPI program above, mpitest.c, can be executed with the following Slurm script, which launches it with srun.
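
For comparison, the launch line inside a job script is short with either launcher. The following is a minimal sketch; the srun flags match the full script below, and the mpirun form assumes the OpenMPI loaded by the setup file:

# Preferred: let Slurm start the MPI tasks
srun --ntasks=${SLURM_NTASKS} --mpi=pmi2 ./mpitest

# Alternative: use OpenMPI's own launcher
mpirun -n ${SLURM_NTASKS} ./mpitest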

Sample mpitest.slurm Using srun

#!/bin/bash

#SBATCH --ntasks=8
#SBATCH --time=1:00:00
#SBATCH --export=NONE # This is needed to ensure a proper job environment


mycommand="./mpitest" # note "./" is used to not rely on $PATH
myargs=""         	# arguments for $mycommand
infiles=("")        	# list of input files
outfiles=("")      	# list output files

source /usr/usc/openmpi/default/setup.sh

# Report the processors and nodes assigned by Slurm
echo "Running on $SLURM_NTASKS processors: $SLURM_NODELIST"

# set up the job and input files, exiting if anything went wrong

cd "$SLURM_SUBMIT_DIR" || exit 1

for file in "${infiles[@]}"; do
   cp "$file" "$TMPDIR" || exit 1
done

cd "$TMPDIR" || exit 1

# run the command, saving the exit value
echo "Running $mycommand"
srun --ntasks=${SLURM_NTASKS} --mpi=pmi2 "$SLURM_SUBMIT_DIR/$mycommand" $myargs
ret=$?
# Using srun is the preferred way to launch MPI programs, but mpirun will also work.

# Copy the output files back to the submission directory
cd "$SLURM_SUBMIT_DIR"

for file in "${outfiles[@]}"; do
   cp "${TMPDIR}/${file}" "$file"
done

echo "Done " `date`
exit $ret
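
If your program reads and writes its files directly in the submission directory and does not need to stage anything through $TMPDIR, a much shorter script is usually enough. The following is a minimal sketch, assuming the same OpenMPI setup file as above:

#!/bin/bash

#SBATCH --ntasks=8
#SBATCH --time=1:00:00
#SBATCH --export=NONE

# Load the same MPI environment that was used to compile the program
source /usr/usc/openmpi/default/setup.sh

cd "$SLURM_SUBMIT_DIR" || exit 1

echo "Running on $SLURM_NTASKS processors: $SLURM_NODELIST"

# Launch one MPI task per slot requested with --ntasks
srun --ntasks=${SLURM_NTASKS} --mpi=pmi2 ./mpitest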

Sample Run

To run a job, complete the following steps:

1. Source the setup file:

hpc-login3: source /usr/usc/openmpi/default/setup.sh

2. Compile the program:

hpc-login3: mpicc -o mpitest mpitest.c
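
To confirm that the setup file placed the intended compiler wrapper on your PATH, you can check it with which (a general shell utility, not specific to this cluster):

hpc-login3: which mpicc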

3. Submit the .slurm file using sbatch. When job submission is successful, the sbatch command returns the job ID:

hpc-login3: sbatch mpitest.slurm
Submitted batch job 12240
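
While the job is waiting or running, you can check its state with squeue; both squeue and scancel (to cancel a job) are standard Slurm commands:

hpc-login3: squeue -j 12240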

Output Files

The output files for the job above are created in the same directory where the sbatch command was executed. By default, the output file is named slurm-<job_id>.out, where <job_id> is the job ID reported by sbatch.
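
If you prefer a different file name, the standard Slurm --output (and optionally --error) directives can be added to the script; %j expands to the job ID:

#SBATCH --output=mpitest-%j.out
#SBATCH --error=mpitest-%j.err
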

In the example above, the file slurm-12240.out contains both the standard output and the standard error of the program.
The output can be viewed by listing the output files and displaying the file:

hpc-login3: ls -l slurm-*.out
-rw-------    1 ttrojan   rds             0 Mar 22 11:00 slurm-12240.out

hpc-login3: more slurm-12240.out
Running on 8 processors: hpc0632
Running ./mpitest
Hello world!  I am process number: 0 on host hpc0632
Hello world!  I am process number: 1 on host hpc0632
Hello world!  I am process number: 2 on host hpc0632
Hello world!  I am process number: 3 on host hpc0632
Hello world!  I am process number: 4 on host hpc0632
Hello world!  I am process number: 5 on host hpc0632
Hello world!  I am process number: 6 on host hpc0632
Hello world!  I am process number: 7 on host hpc0632
Done  Thu Mar 22 11:00:06 PDT 2018