[slurm-users] Running an MPI job across two partitions

Riebs, Andy andy.riebs at hpe.com
Mon Mar 23 15:42:43 UTC 2020


When you say “distinct compute nodes,” are they at least on the same network fabric?

If so, the first thing I’d try would be to create a new partition that encompasses all of the nodes of the other two partitions.
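A minimal sketch of that approach in slurm.conf, assuming hypothetical node and partition names (nodeA*/nodeB*, part_a/part_b are placeholders, not from the original message):

```
# Existing partitions with disjoint node sets (names are hypothetical):
PartitionName=part_a Nodes=nodeA[01-16] Default=NO MaxTime=INFINITE State=UP
PartitionName=part_b Nodes=nodeB[01-16] Default=NO MaxTime=INFINITE State=UP

# New partition spanning both node sets; nodes may belong to
# multiple partitions, so part_a and part_b can stay as they are:
PartitionName=combined Nodes=nodeA[01-16],nodeB[01-16] Default=NO MaxTime=INFINITE State=UP
```

After editing slurm.conf, run `scontrol reconfigure` and submit the MPI job with `--partition=combined`.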

Andy

From: slurm-users [mailto:slurm-users-bounces at lists.schedmd.com] On Behalf Of CB
Sent: Monday, March 23, 2020 11:32 AM
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: [slurm-users] Running an MPI job across two partitions

Hi,

I'm running Slurm 19.05 version.

Is there any way to launch an MPI job on a group of nodes drawn from two or more partitions, where each partition has distinct compute nodes?

I've looked at the heterogeneous job support, but it creates two separate jobs.
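For reference, a heterogeneous ("pack") job in Slurm 19.05 would look roughly like the sketch below; partition names and the application binary are placeholders. Note that even though `sbatch` records the components as separate job IDs, `srun --pack-group` can launch a single step across both components, provided the MPI stack supports a shared PMI environment across pack groups:

```shell
#!/bin/bash
# Heterogeneous job sketch for Slurm 19.05 ("packjob" syntax;
# renamed "hetjob" in later releases). p1/p2 are hypothetical partitions.
#SBATCH --partition=p1 --nodes=2
#SBATCH packjob
#SBATCH --partition=p2 --nodes=2

# Launch one MPI application across both components:
srun --pack-group=0,1 ./mpi_app
```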

If there is no such capability with the current Slurm, I'd like to hear any recommendations or suggestions.

Thanks,
Chansup