[slurm-users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

Prentice Bisbal pbisbal at pppl.gov
Mon Dec 18 15:26:42 MST 2017


Slurm users,

I've already posted this question to the OpenMPI and Beowulf lists, but 
I also wanted to post this question here to get more Slurm-specific 
opinions, in case some of you don't subscribe to those lists and have 
meaningful input to provide. For those of you who subscribe to one or more
of these lists, I apologize for making you read this a 3rd time.

We use OpenMPI with Slurm as our scheduler, and a user has asked me 
this: should they use mpiexec/mpirun or srun to start their MPI jobs 
through Slurm?
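
To make the question concrete, the two styles in question look roughly 
like this inside a batch script (a minimal sketch; the node/task counts 
and ./my_mpi_program are just placeholders):

    #!/bin/bash
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=16

    # Option 1: let OpenMPI's launcher start the ranks; it picks up
    # the allocation from the Slurm environment
    mpirun ./my_mpi_program

    # Option 2: let Slurm launch the ranks directly
    srun ./my_mpi_program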

My inclination is to use mpiexec, since that is the only method that's 
(somewhat) defined in the MPI standard and therefore the most portable, 
and the examples in the OpenMPI FAQ use mpirun. However, the Slurm 
documentation on the SchedMD website says to use srun with the 
--mpi=pmi2 option. (See links below.)
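
If I'm reading the Slurm guide correctly, the srun form it describes 
would look something like this (assuming Slurm was configured with PMI2 
support; --mpi=pmix would be the analogous option for a PMIx-enabled 
build):

    srun --mpi=pmi2 ./my_mpi_program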

What are the pros/cons of using these two methods, other than the 
portability issue I already mentioned? Does srun+PMI use a different 
method to wire up the connections? Some things I read online seem to 
indicate that. If Slurm was built with PMI support, and OpenMPI was 
built with Slurm support, does it really make any difference?
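
For what it's worth, this is roughly how I've been checking what each 
side was built with (output will obviously vary with versions and 
configure options):

    # List the MPI plugin types this Slurm installation supports
    srun --mpi=list

    # Check whether this OpenMPI build has Slurm and PMI support compiled in
    ompi_info | grep -i slurm
    ompi_info | grep -i pmi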

https://www.open-mpi.org/faq/?category=slurm
https://slurm.schedmd.com/mpi_guide.html#open_mpi

-- 
Prentice Bisbal
Lead Software Engineer
Princeton Plasma Physics Laboratory
http://www.pppl.gov



