[slurm-users] allocate last MPI-rank to an exclusive node?
Hendryk Bockelmann
bockelmann at dkrz.de
Tue Feb 19 09:48:32 UTC 2019
Hi,
we had the same issue and solved it by using the 'plane' distribution in
combination with an MPMD-style srun. For your example:
#SBATCH -N 3 # 3 nodes with 10 cores each
#SBATCH -n 21 # 21 MPI-tasks in sum
#SBATCH --cpus-per-task=1 # if you do not want hyperthreading
cat > mpmd.conf << EOF
0-19 ./slave_app
20 ./master_app
EOF
srun --distribution=plane=10 --multi-prog mpmd.conf
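If you want to check where the tasks end up before running the real
applications, something like the following (inside the same allocation,
so -N 3 / -n 21 as above) should print the task-to-node mapping:
srun --distribution=plane=10 bash -c 'echo "task $SLURM_PROCID on $(hostname)"'
With plane=10, tasks 0-9 land on the first node, 10-19 on the second,
and task 20 (the master) alone on the third.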
More examples are given here:
https://slurm.schedmd.com/dist_plane.html
Best regards,
Hendryk