[slurm-users] Can one specify attributes on a GRES resource?
Quirin Lohr
quirin.lohr at in.tum.de
Wed Mar 20 08:06:25 UTC 2019
Hi Will,
I solved this by creating a new GRES:
Some nodes have VRAM:no_consume:12G
Some nodes have VRAM:no_consume:24G
"no_consume" because it would be for the whole node otherwise.
It only works because the nodes only have one type of GPUs each.
It is then requested with --gres=gpu:1,VRAM:16G
Here is an extract of my slurm.conf:
> NodeName=node7 Gres=gpu:p6000:8,VRAM:no_consume:24G Boards=1 SocketsPerBoard=2 CoresPerSocket=18 ThreadsPerCore=1 RealMemory=257843 Weight=10 Feature=p6000
> NodeName=node6 Gres=gpu:titanxpascal:8,VRAM:no_consume:12G Boards=1 SocketsPerBoard=2 CoresPerSocket=18 ThreadsPerCore=1 RealMemory=257854 Weight=1 Feature=titanxp
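One thing the extract above doesn't show: for Slurm to accept a custom GRES name like VRAM at all, it also has to be declared cluster-wide in slurm.conf, along these lines (a sketch, matching the node definitions above):

> GresTypes=gpu,VRAM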
The CUDA cores could be exposed the same way (which is a nice idea, by the way).
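To illustrate, a job script using this scheme might look like the following (a sketch only; the program name is a placeholder, and the 16G request would only match nodes advertising at least that much VRAM, i.e. node7 in the extract above):

> #!/bin/bash
> #SBATCH --gres=gpu:1,VRAM:16G
> srun ./my_gpu_program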
Regards
Quirin
On 15.03.2019 at 17:04, Will Dennis wrote:
> Hi all,
>
> I currently have features specified on my GPU-equipped nodes as follows:
>
> GPUMODEL_1050TI,GPUCHIP_GP107,GPUARCH_PASCAL,GPUMEM_4GB,GPUCUDACORES_768
>
> or
>
> GPUMODEL_TITANV,GPUCHIP_GV100,GPUARCH_VOLTA,GPUMEM_12GB,GPUCUDACORES_5120,GPUTENSORCORES_640
>
> The “GPUMEM” and “GPU[CUDA|TENSOR]CORES” tags are informative, but not
> useful if one wants to specify GPUs with “more memory than 6GB” or “at
> least 1000 cores”... Is there a way to set attributes on the GRES
> resources so a user may do these sorts of constraints? (Can’t find
> anything on Google)
>
> I run Slurm clusters here on versions 16.05.4 and 17.11.7.
>
> Thanks,
>
> Will
>
--
Quirin Lohr
Systems Administration
Technische Universität München
Fakultät für Informatik
Lehrstuhl für Bildverarbeitung und Mustererkennung
Boltzmannstrasse 3
85748 Garching
Tel. +49 89 289 17769
Fax +49 89 289 17757
quirin.lohr at in.tum.de
www.vision.in.tum.de