[slurm-users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

rhc at open-mpi.org
Mon Dec 18 18:13:18 MST 2017


Repeated here from the OMPI list:

We have had reports of applications running faster when executed under OMPI's mpiexec than when started by srun. The reasons aren't entirely clear, but they are likely related to differences in mapping/binding options (OMPI provides a much wider range than srun) and to OMPI-specific optimization flags that mpiexec provides.
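
For illustration, a sketch only - exact flag spellings vary across OMPI and Slurm versions (e.g., older Slurm releases use --cpu_bind where newer ones accept --cpu-bind):

    # OMPI's mpiexec exposes fine-grained mapping/binding controls, e.g.
    # two processes per socket, each bound to a core, with a binding report:
    mpiexec --map-by ppr:2:socket --bind-to core --report-bindings ./a.out

    # The nearest srun equivalent is coarser:
    srun --ntasks-per-socket=2 --cpu_bind=cores ./a.out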

OMPI uses PMIx for wireup support (starting with the v2.x series), which provides faster startup than other PMI implementations. However, PMIx support is also available in Slurm starting with the 16.05 release, and some further PMIx-based launch optimizations were added in the Slurm 17.11 release. So I would expect that launching via srun with the latest Slurm release and PMIx would be faster than mpiexec - though that still leaves the faster-execution reports to consider.
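
For example, assuming your Slurm was built against the PMIx library (the first command shows which PMI plugins are actually available):

    # List the PMI plugins this Slurm installation supports:
    srun --mpi=list

    # Launch an OMPI v2.x-or-later application via srun using PMIx
    # (requires Slurm 16.05 or later built with PMIx support):
    srun --mpi=pmix -n 64 ./a.out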

HTH
Ralph


> On Dec 18, 2017, at 2:26 PM, Prentice Bisbal <pbisbal at pppl.gov> wrote:
> 
> Slurm users,
> 
> I've already posted this question to the OpenMPI and Beowulf lists, but I also wanted to post it here to get more Slurm-specific opinions, in case some of you don't subscribe to those lists and have meaningful input to provide. For those of you who subscribe to one or more of these lists, I apologize for making you read this a third time.
> 
> We use OpenMPI with Slurm as our scheduler, and a user has asked me this: should they use mpiexec/mpirun or srun to start their MPI jobs through Slurm?
> 
> My inclination is to use mpiexec, since that is the only method that's (somewhat) defined in the MPI standard and therefore the most portable, and the examples in the OpenMPI FAQ use mpirun. However, the Slurm documentation on the SchedMD website says to use srun with the --mpi=pmi2 option. (See links below.)
> 
> What are the pros/cons of these two methods, other than the portability issue I already mentioned? Does srun+PMI use a different method to wire up the connections? Some things I've read online seem to indicate that. If Slurm was built with PMI support, and OpenMPI was built with Slurm support, does it really make any difference?
> 
> https://www.open-mpi.org/faq/?category=slurm
> https://slurm.schedmd.com/mpi_guide.html#open_mpi
> 
> -- 
> Prentice Bisbal
> Lead Software Engineer
> Princeton Plasma Physics Laboratory
> http://www.pppl.gov
> 
> 



