[slurm-users] cpu binding and multiple cores per task
Bernstein, Noam CIV USN NRL (6393) Washington DC (USA)
noam.bernstein at nrl.navy.mil
Mon Mar 21 13:49:38 UTC 2022
Is there any documentation of the precise meaning of the bitmask that "--cpu-bind=verbose" reports?
Also, can anyone explain why there's a difference between specifying "--ntasks" and "--cpus-per-task" in the #SBATCH header and specifying them on the srun command line, as the output below shows?
If I use this sbatch header
#SBATCH --exclusive
#SBATCH --ntasks=8
#SBATCH --cpus-per-task=8
#SBATCH --hint=nomultithread
and "srun --cpu-bind=verbose" I get the following report:
cpu-bind=MASK - compute-7-16, task 0 0 [192319]: mask 0xff set
cpu-bind=MASK - compute-7-16, task 1 1 [192320]: mask 0xff0000 set
cpu-bind=MASK - compute-7-16, task 2 2 [192321]: mask 0xff00 set
cpu-bind=MASK - compute-7-16, task 3 3 [192322]: mask 0xff000000 set
cpu-bind=MASK - compute-7-17, task 4 0 [138387]: mask 0xff set
cpu-bind=MASK - compute-7-17, task 5 1 [138388]: mask 0xff0000 set
cpu-bind=MASK - compute-7-17, task 6 2 [138389]: mask 0xff00 set
cpu-bind=MASK - compute-7-17, task 7 3 [138390]: mask 0xff000000 set
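In case it helps anyone answer, my working assumption is that bit i of the mask corresponds to logical CPU i as the OS numbers it, and I've been decoding the masks with a throwaway Python snippet like the one below (cpus_from_mask is just a name I made up for it):

    # decode a Slurm affinity mask (hex string) into the list of logical CPU ids
    # it covers, assuming bit i of the mask corresponds to logical CPU i
    def cpus_from_mask(mask_hex):
        m = int(mask_hex, 16)
        return [i for i in range(m.bit_length()) if (m >> i) & 1]

    for mask in ("0xff", "0xff00", "0xff0000", "0xff000000"):
        print(mask, "->", cpus_from_mask(mask))

Read that way, each task in this first case gets a contiguous block of 8 logical CPUs (0-7, 8-15, 16-23, 24-31 on each node), which is what I'd expect with --hint=nomultithread.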
but if I use
#SBATCH --exclusive
#SBATCH --ntasks=64
and "srun --ntasks=8 --cpus-per-task=8 --cpu-bind=verbose" I get
cpu-bind=MASK - compute-7-12, task 0 0 [8657]: mask 0xf0000000f set
cpu-bind=MASK - compute-7-12, task 1 1 [8658]: mask 0xf0000000f0000 set
cpu-bind=MASK - compute-7-12, task 2 2 [8659]: mask 0xf0000000f0 set
cpu-bind=MASK - compute-7-12, task 3 3 [8660]: mask 0xf0000000f00000 set
cpu-bind=MASK - compute-7-17, task 4 0 [139467]: mask 0xf0000000f set
cpu-bind=MASK - compute-7-17, task 5 1 [139468]: mask 0xf0000000f0000 set
cpu-bind=MASK - compute-7-17, task 6 2 [139469]: mask 0xf0000000f0 set
cpu-bind=MASK - compute-7-17, task 7 3 [139470]: mask 0xf0000000f00000 set
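Decoding these the same way (again assuming bit i means logical CPU i) gives each task two separated blocks of 4 CPUs, e.g. 0xf0000000f -> CPUs 0-3 plus 32-35, and 0xf0000000f0000 -> CPUs 16-19 plus 48-51, which I'm guessing are hyperthread siblings of each other, rather than the 8 contiguous CPUs per task I get in the first case.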
thanks,
Noam