Examples of MPI Programs

This page contains an example Message Passing Interface (MPI) program, an example of running it under PBS, and a sample job run. For more information on MPI, please see the Message Passing Interface (MPI) page. For more information on sourcing the correct MPI compiler for your job, please see the Setting Up a MPI Compiler page.

Example of an MPI Program mpitest.c

/* program hello */
/* Adapted from mpihello.f by drs */

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
  int rank;
  char hostname[256];

  MPI_Init(&argc,&argv);                 /* initialize the MPI environment */
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* get this process's rank within MPI_COMM_WORLD */
  gethostname(hostname,255);             /* get the name of the node this process runs on */

  printf("Hello world!  I am process number: %d on host %s\n", rank, hostname);

  MPI_Finalize();                        /* shut down the MPI environment */

  return 0;
}
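
If you also want each process to report how many processes are in the job, a minimal variation of the program above can call MPI_Comm_size as well. This is an illustrative sketch, not part of the original example:

/* Sketch: mpitest.c extended to also report the total process count */

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
  int rank, size;
  char hostname[256];

  MPI_Init(&argc,&argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank */
  MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes in the job */
  gethostname(hostname,255);

  printf("Hello world!  I am process number: %d of %d on host %s\n",
         rank, size, hostname);

  MPI_Finalize();

  return 0;
}

It is compiled and run exactly like mpitest.c.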

Example of MPI Program Execution Using mpiexec

The preferred means of starting MPI programs on the Linux cluster is mpiexec. The main advantage of mpiexec over mpirun is that no setup files need to be sourced before executing your program; you must, however, still source the correct setup file before compiling it. The PBS script is also simpler to write with mpiexec than with mpirun. The example MPI program above, mpitest.c, could be executed with the following PBS script, which uses mpiexec.

Sample mpitest.pbs Using mpiexec

#!/bin/csh
#*** The "#PBS" lines must come before any non-blank non-comment lines ***
#PBS -l walltime=30:00,nodes=4:myri:ppn=2

set mycommand = "./mpitest"  # note "./" is used to not rely on $PATH
set myargs    = ""           # arguments for $mycommand
set infiles   = ""           # list of input files
set outfiles  = ""           # list of output files

if ($?PBS_NODEFILE && $?PBS_JOBID) then
# count the number of processors assigned by PBS
   set NP = `wc -l < $PBS_NODEFILE`
   echo "Running on $NP processors: "`cat $PBS_NODEFILE`
else
   echo "This script must be submitted to PBS with 'qsub -l nodes=X'"
   exit 1
endif

# set up the job and input files, exiting if anything went wrong
cd "$PBS_O_WORKDIR" || exit 1

if ("$infiles" != "") then
   foreach file ($infiles)
      cp -v $file /scratch/ || exit 1
   end
endif

cd /scratch || exit 1

# run the command, saving the exit value
mpiexec $PBS_O_WORKDIR/$mycommand $myargs

set ret = $status
# get the output files
cd "$PBS_O_WORKDIR"

if ("$outfiles" != "") then
   foreach file ($outfiles)
      cp -vf /scratch/`basename $file` $file
   end
endif

echo "Done   " `date`
exit $ret
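
Two design points in this script are worth noting. It stages input files to /scratch and runs the program there; as the PBS prologue in the sample output below shows, a scratch file system (PVFS) is mounted at /scratch on the compute nodes for the job. It also saves mpiexec's exit status in $ret and ends with "exit $ret", so that PBS records whether the program itself succeeded.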

Sample Run

To run a job, complete the following steps:

    1. Source the setup file:
hpc-login2: source /usr/usc/openmpi/default/setup.csh
    2. Compile the program:
hpc-login2: mpicc -o mpitest mpitest.c
    3. Submit the .pbs file to the queue. When job submission is successful, the qsub command returns the job ID:
hpc-login2: qsub mpitest.pbs
12240.hpc-pbs.usc.edu
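
While the job is queued or running, you can check its status with the standard PBS qstat command, for example:

hpc-login2: qstat 12240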

Output Files

The output files for the job above are created in the directory where the qsub command was executed. They are named after your PBS script, with an extension that contains "e" for "error" or "o" for "output" followed by the job number.

In the example above, the file mpitest.pbs.e12240 contains the error messages, and the file mpitest.pbs.o12240 contains the output of the program.

The output of the program can be viewed by listing the output files:

hpc-login2: ls -l mpitest.pbs.*
-rw-------    1 nirmal   rds             0 Jul 16 14:59 mpitest.pbs.e12240
-rw-------    1 nirmal   rds          1442 Jul 16 14:59 mpitest.pbs.o12240

hpc-login2: more mpitest.pbs.*
::::::::::::::
mpitest.pbs.e12240
::::::::::::::
::::::::::::::
mpitest.pbs.o12240
::::::::::::::
----------------------------------------
Begin PBS Prologue Wed Dec  8 16:49:24 PST 2004
Job ID:         12240.hpc-pbs.usc.edu
Username:       nirmal
Group:          rds
Nodes:          hpc0632 hpc0633 hpc0634 hpc0636
PVFS: mounting at /scratch
End PBS Prologue Wed Dec  8 16:49:27 PST 2004
----------------------------------------
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
Starting 12240.hpc-pbs.usc.edu Wed Dec 8 16:49:27 PST 2004
Initiated on hpc0636
Running on 8 processors: hpc0636 hpc0636 hpc0634 hpc0634 hpc0633 hpc0633 hpc0632 hpc0632

Hello world!  I am process number: 4 on host hpc0632
Hello world!  I am process number: 3 on host hpc0634
Hello world!  I am process number: 0 on host hpc0636
Hello world!  I am process number: 2 on host hpc0634
Hello world!  I am process number: 6 on host hpc0633
Hello world!  I am process number: 1 on host hpc0636
Hello world!  I am process number: 5 on host hpc0632
Hello world!  I am process number: 7 on host hpc0633

Done    Wed Dec 8 16:49:28 PST 2004
--------------------------------------------------
Begin PBS Epilogue Wed Dec  8 16:49:33 PST 2004
Job ID:         12240.hpc-pbs.usc.edu
Username:       nirmal
Group:          rds
Job Name:       mpitest.pbs
Session:        4020
Limits:         neednodes=4:myri:ppn=2,nodes=4:myri:ppn=2,walltime=00:30:00
Resources:      cput=00:00:00,mem=12648kb,vmem=25752kb,walltime=00:00:04
Queue:          quick
Account:
Nodes:          hpc0632 hpc0633 hpc0634 hpc0636
Killing leftovers...
End PBS Epilogue Wed Dec  8 16:49:35 PST 2004
-------------------------------------------------