[slurm-users] Running an MPI job across two partitions
andy.riebs at hpe.com
Mon Mar 23 15:42:43 UTC 2020
When you say “distinct compute nodes,” are they at least on the same network fabric?
If so, the first thing I’d try would be to create a new partition that encompasses all of the nodes of the other two partitions.
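A minimal sketch of that approach in slurm.conf, assuming the two existing partitions are named part1 and part2 (all partition and node names here are illustrative, not from the original thread):

```
# Existing partitions (node ranges are hypothetical)
PartitionName=part1 Nodes=node[01-16] State=UP
PartitionName=part2 Nodes=node[17-32] State=UP

# New partition spanning the nodes of both
PartitionName=combined Nodes=node[01-32] Default=NO State=UP
```

After editing slurm.conf, `scontrol reconfigure` picks up the new partition, and a single MPI job submitted to `combined` can then be scheduled across any of the nodes.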
From: slurm-users [mailto:slurm-users-bounces at lists.schedmd.com] On Behalf Of CB
Sent: Monday, March 23, 2020 11:32 AM
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: [slurm-users] Running an MPI job across two partitions
I'm running Slurm 19.05 version.
Is there any way to launch an MPI job on a group of distributed nodes from two or more partitions, where each partition has distinct compute nodes?
I've looked at the heterogeneous job support, but it creates two separate jobs.
If there is no such capability with the current Slurm, I'd like to hear any recommendations or suggestions.
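For reference, the heterogeneous-job route mentioned above looks roughly like the following in Slurm 19.05, where heterogeneous jobs were still called "pack" jobs (partition names and the application binary are hypothetical). Note that, as observed, Slurm treats the components as separate jobs for scheduling purposes, and whether the ranks form a single MPI_COMM_WORLD depends on the MPI library and launch method:

```
#!/bin/bash
#SBATCH --partition=part1 --nodes=2
#SBATCH packjob
#SBATCH --partition=part2 --nodes=2

# Launch one MPI application across both job components
srun --pack-group=0,1 ./mpi_app
```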