You probably want to look at this setting: https://slurm.schedmd.com/slurm.conf.html#OPT_CR_Core_Memory

This is what we use in combination with https://slurm.schedmd.com/slurm.conf.html#OPT_select/cons_tres to allow users to specify what fraction of the node they want to use, leaving the rest open for other users.

-Paul Edmon-

On 5/8/2026 12:21 PM, Ron Gould via slurm-users wrote:
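For reference, a minimal sketch of how those two settings fit together in slurm.conf. The node definition values below (names, core count, memory) are placeholders loosely matched to the GPUServer nodes described in the question; adjust them to the actual hardware:

```
# slurm.conf (sketch): schedule cores and memory as consumable
# resources instead of allocating whole nodes
SelectType=select/cons_tres
SelectTypeParameters=CR_Core_Memory

# Track GPUs as a generic resource (placeholder node values)
GresTypes=gpu
NodeName=GPUServer[1-2] Gres=gpu:2 CPUs=40 RealMemory=384000 State=UNKNOWN
```

A job then requests only what it needs, e.g. `sbatch --gres=gpu:1 --cpus-per-task=2 --mem=16G ...`, and the second GPU plus the remaining cores and memory stay schedulable for other jobs. One gotcha: with CR_Core_Memory you should set a DefMemPerCPU (or require --mem) and enforce limits with cgroups, otherwise a job that omits a memory request can still block or oversubscribe the node's RAM.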
I have a user asking about a Slurm config setting that would keep a job from tying up a node's unused resources, so that another job can run on the same node concurrently.
User: On our cluster, our Fluent GPU jobs each use only 1 of the 2 GPUs on GPUServer[1,2]. Currently the node is fully allocated to whichever job lands on it first, leaving the second GPU, 38 CPU cores, and 270+ GB of RAM idle.
Has anyone dealt with this? What options would facilitate this? Are there any gotchas or pitfalls?