[slurm-users] How to bind GPUs with CPU cores

William Zhang zhangyuchao1 at hotmail.com
Fri Oct 14 09:41:35 UTC 2022


Dear all,
     Our compute nodes each have 128 CPU cores and 8 NVIDIA GPU cards.
     I have configured 8 NUMA nodes, as shown below:
[root@g0025 ~]# numactl --hardware
available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
node 0 size: 64130 MB
node 0 free: 62086 MB
node 1 cpus: 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
node 1 size: 64492 MB
node 1 free: 62306 MB
node 2 cpus: 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
node 2 size: 64508 MB
node 2 free: 62487 MB
node 3 cpus: 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
node 3 size: 64496 MB
node 3 free: 62443 MB
node 4 cpus: 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79
node 4 size: 64508 MB
node 4 free: 62283 MB
node 5 cpus: 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 5 size: 64508 MB
node 5 free: 61074 MB
node 6 cpus: 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111
node 6 size: 64508 MB
node 6 free: 62284 MB
node 7 cpus: 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127
node 7 size: 64507 MB
node 7 free: 62354 MB
node distances:
node   0   1   2   3   4   5   6   7
  0:  10  12  12  12  32  32  32  32
  1:  12  10  12  12  32  32  32  32
  2:  12  12  10  12  32  32  32  32
  3:  12  12  12  10  32  32  32  32
  4:  32  32  32  32  10  12  12  12
  5:  32  32  32  32  12  10  12  12
  6:  32  32  32  32  12  12  10  12
  7:  32  32  32  32  12  12  12  10

[root@g0025 ~]# nvidia-smi topo -m
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    CPU Affinity    NUMA Affinity
GPU0     X      SYS     SYS     SYS     SYS     SYS     SYS     SYS     48-63   3
GPU1    SYS      X      SYS     SYS     SYS     SYS     SYS     SYS     32-47   2
GPU2    SYS     SYS      X      SYS     SYS     SYS     SYS     SYS     16-31   1
GPU3    SYS     SYS     SYS      X      SYS     SYS     SYS     SYS     0-15    0
GPU4    SYS     SYS     SYS     SYS      X      SYS     SYS     SYS     112-127 7
GPU5    SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     96-111  6
GPU6    SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     80-95   5
GPU7    SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      64-79   4

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks


Users can submit jobs requesting one GPU together with 6 to 16 CPU cores.
I set gres.conf like this:

[root@g0038 ~]# cat /etc/slurm/gres.conf
Name=gpu File=/dev/nvidia0 COREs=0-15
Name=gpu File=/dev/nvidia1 COREs=16-31
Name=gpu File=/dev/nvidia2 COREs=32-47
Name=gpu File=/dev/nvidia3 COREs=48-63
Name=gpu File=/dev/nvidia4 COREs=64-79
Name=gpu File=/dev/nvidia5 COREs=80-95
Name=gpu File=/dev/nvidia6 COREs=96-111
Name=gpu File=/dev/nvidia7 COREs=112-127
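
For reference, these are the slurm.conf settings I think are relevant for core-level scheduling and GPU-to-core binding. The node line below is only a sketch: the hostlist and RealMemory value are placeholders for our real values, and I declare each NUMA domain as a socket of 16 cores.

SelectType=select/cons_tres
SelectTypeParameters=CR_Core_Memory
GresTypes=gpu
TaskPlugin=task/affinity,task/cgroup
# Each of the 8 NUMA domains is declared as a socket of 16 cores;
# RealMemory is only an estimate of the ~512 GB reported by numactl.
NodeName=g[0025,0038] Sockets=8 CoresPerSocket=16 ThreadsPerCore=1 RealMemory=510000 Gres=gpu:8 State=UNKNOWN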


How can we achieve the following behavior?
For example:
A job requires 6 CPUs and 1 GPU. It should run on GPU ID 0 with CPU IDs 0-5.
The second job requires 8 CPUs and 1 GPU. If it runs on GPU ID 1, we want the CPU IDs to be 16-23.
The third job requires 6 CPUs and 1 GPU. If it runs on GPU ID 2, we want the CPU IDs to be 32-37.
The next job requires 12 CPUs and 2 GPUs. If it runs on GPU IDs 3-4, we want the CPU IDs to be 48-53 and 64-69.
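
As far as I understand, restricting a job to the GPU-local cores can be requested at submission time with --gres-flags=enforce-binding. A minimal batch-script sketch of the first example above (my_gpu_app is just a placeholder for the real application):

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=6
#SBATCH --gres=gpu:1
# enforce-binding: only CPUs listed for the allocated GPU in gres.conf may be used
#SBATCH --gres-flags=enforce-binding

srun ./my_gpu_app

My understanding is that this confines the job to the COREs range of whichever GPU it is given (e.g. 0-15 for /dev/nvidia0), although Slurm may pick any 6 cores inside that range rather than exactly 0-5.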


Can we implement this behavior?
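
Once a job is running, I plan to verify the actual binding like this (12345 stands for a real job ID):

[root@g0025 ~]# scontrol show job 12345 -d | grep -E 'CPU_IDs|GRES'

and, inside the job itself:

taskset -cp $$                # CPU cores the shell is bound to
echo $CUDA_VISIBLE_DEVICES    # GPU index assigned by Slurm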