[slurm-users] OpenMPI & Slurm: mpiexec/mpirun vs. srun

rhc at open-mpi.org
Mon Dec 18 19:35:00 MST 2017


If it truly is due to mapping/binding and optimization params, then I would expect it to be highly application-specific. The sporadic nature of the reports would also seem to support that possibility.
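
One quick way to test that theory is to have each launcher report what it is actually doing with the processes. A rough sketch (untested - the 4-rank job and the ./my_app binary are just placeholders):

    # Open MPI: print the binding each rank ended up with
    mpirun --report-bindings -np 4 ./my_app

    # Slurm: print the cpu binding srun applied to each task
    srun -n 4 --cpu-bind=verbose,cores ./my_app

If the two reports show different core/socket placements for the same job shape, that alone could explain an application-dependent performance gap.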

I’d be very surprised to find run time scaling better with srun unless you are using some layout option with one launcher that you aren’t using with the other. mpiexec has all the srun layout options, and a lot more - so I suspect you just aren’t using the equivalent mpiexec option. Exploring those might even reveal a combination that runs better :-)
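
For example, if srun is distributing ranks cyclically across nodes, the rough mpiexec equivalent would look something like the following (a sketch only - the 16-rank job size and ./my_app are placeholders, and the best mapping depends on the application):

    # srun: round-robin tasks across nodes, bind each task to its cores
    srun -n 16 --distribution=cyclic --cpu-bind=cores ./my_app

    # mpiexec: map ranks round-robin by node and bind to core
    mpiexec -np 16 --map-by node --bind-to core ./my_app

mpiexec also offers patterns srun doesn’t expose, e.g. --map-by ppr:N:socket (N processes per resource) or reordering ranks with --rank-by, which is where the extra tuning room comes from.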

Launch time, however, is a different subject.


> On Dec 18, 2017, at 5:23 PM, Christopher Samuel <chris at csamuel.org> wrote:
> 
> On 19/12/17 12:13, rhc at open-mpi.org wrote:
> 
>> We have had reports of applications running faster when executing under OMPI’s mpiexec versus when started by srun.
> 
> Interesting, I know that used to be the case with older versions of
> Slurm but since (I think) about 15.x we saw srun scale better than
> mpirun (this was for the molecular dynamics code NAMD).
> 
> -- 
> Chris Samuel  :  http://www.csamuel.org/  :  Melbourne, VIC
> 



