Slurm has support for cgroups, which confine each job to only the resources allocated to it.

https://slurm.schedmd.com/cgroups.html
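As a rough sketch (assuming a reasonably current Slurm and that the nodes are not being handed out exclusively), the cgroup enforcement side usually comes down to a couple of lines in slurm.conf plus a cgroup.conf on the compute nodes:

  # slurm.conf -- track and confine jobs with cgroups
  ProctrackType=proctrack/cgroup
  TaskPlugin=task/affinity,task/cgroup

  # cgroup.conf -- limit each job to its allocated cores, memory and GPUs
  ConstrainCores=yes
  ConstrainRAMSpace=yes
  ConstrainDevices=yes

With ConstrainDevices=yes, a job that requested a single GPU through gres/gpu only sees that device, so the other GPU and the unrequested cores and memory remain available for a second job on the same node.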

Tim

 

From: Ron Gould via slurm-users <slurm-users@lists.schedmd.com>
Date: Friday, 8 May 2026 at 18:03
To: slurm-users@lists.schedmd.com <slurm-users@lists.schedmd.com>
Subject: [slurm-users] SLURM config option to not tie up a host completely.

I have a user asking about a SLURM config setting that would keep a job from tying up a node's unused resources, so that another job can run on the same node concurrently.

User:
On our cluster, our Fluent GPU jobs each use only 1 of the 2 GPUs on GPUServer[1,2]. Currently each node is fully allocated to whichever job lands on it first, leaving the second GPU, 38 CPU cores, and 270+ GB of RAM idle.

Has anyone dealt with this? What options would facilitate this? Are there any gotchas or pitfalls?

--
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-leave@lists.schedmd.com

