As long as the partition does not force exclusive node allocation, people can request in Slurm exactly the cores and memory they need, and Slurm will "automagically" create a cgroup to isolate the job and happily schedule additional jobs alongside it (each in its own cgroup).
We run in that mode, and perhaps you do too already; just make sure jobs request only the cores and memory they actually need rather than the whole node.
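
For reference, a minimal sketch of the settings that enable this kind of node sharing. The partition name, time limit, and job sizes below are assumptions for illustration, not taken from your setup:

    # slurm.conf -- schedule individual cores/memory instead of whole nodes
    SelectType=select/cons_tres
    SelectTypeParameters=CR_Core_Memory
    ProctrackType=proctrack/cgroup
    TaskPlugin=task/cgroup
    # partition must not force whole-node allocation (i.e. not OverSubscribe=EXCLUSIVE)
    PartitionName=gpu Nodes=GPUServer[1,2] OverSubscribe=NO MaxTime=INFINITE State=UP

    # cgroup.conf -- confine each job to the resources it requested
    ConstrainCores=yes
    ConstrainRAMSpace=yes
    ConstrainDevices=yes

A Fluent job would then request just its share, something like (fluent_job.sh is a hypothetical batch script):

    sbatch --partition=gpu --gres=gpu:1 --cpus-per-task=8 --mem=64G fluent_job.sh

With that, a second job asking for the other GPU and the remaining cores and memory can be scheduled onto the same node.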

On Fri, May 8, 2026 at 11:03 AM Ron Gould via slurm-users <slurm-users@lists.schedmd.com> wrote:
I have a user asking about a Slurm config setting that would keep a job from tying up a node's unused resources, thereby allowing another job to run on that node concurrently.

User:
On our cluster, our Fluent GPU jobs each use only 1 of the 2 GPUs on GPUServer[1,2]. Currently the node is fully allocated to whichever job lands on it first, leaving the second GPU, 38 CPU cores, and 270+ GB of RAM idle.

Has anyone dealt with this? What options would facilitate this? Are there any gotchas or pitfalls?

--
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-leave@lists.schedmd.com