[slurm-users] Requirement: only one GPU job should run on a GPU node in a cluster

Sudeep Narayan Banerjee snbanerjee at iitgn.ac.in
Fri Dec 17 07:33:32 UTC 2021


Hello All: Is it possible to restrict a GPU node so that it runs only one GPU job at a time?

That is:
a) We submit a GPU job to an empty GPU node (say gpu2) requesting 16 cores, since that gives the best GPU performance.
b) Another user then floods the remaining CPU cores on gpu2, sharing the node with the GPU job. The net result is that the GPU job takes roughly a 40% performance hit on its next run.

Can we make a change in the Slurm configuration so that, once a GPU job is running on a GPU node, no other job can be scheduled on that node?
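
For example, would something along these lines be the right direction? This is only a sketch; the node and partition names below are illustrative and not taken from my actual slurm.conf. My understanding is that OverSubscribe=EXCLUSIVE allocates whole nodes to jobs in that partition, and that sbatch --exclusive does the same on a per-job basis.

    # Illustrative slurm.conf fragment: give each job in the "gpu" partition
    # the whole node, so no other job can share a GPU node's CPU cores.
    NodeName=gpu[1-4] Gres=gpu:2 CPUs=32 RealMemory=192000 State=UNKNOWN
    PartitionName=gpu Nodes=gpu[1-4] OverSubscribe=EXCLUSIVE MaxTime=INFINITE State=UP

Or, per job, in the batch script:

    # Illustrative sbatch directive: request the node exclusively for this job.
    #SBATCH --exclusive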

I am attaching my slurm.conf file to this email. Any help will be deeply appreciated!

I apologize if this is a duplicate email.


Thanks & Regards,
Sudeep Narayan Banerjee
System Analyst | Scientist B
Information System and Technology Facility
Academic Block 5, Room 110A
Indian Institute of Technology Gandhinagar
Palaj, Gujarat 382055, INDIA
-------------- next part --------------
A non-text attachment was scrubbed...
Name: slurm.conf
Type: application/octet-stream
Size: 2648 bytes
Desc: not available
URL: <http://lists.schedmd.com/pipermail/slurm-users/attachments/20211217/47b9ff27/attachment-0001.obj>

