No, as I replied to a previous poster, when you put "--exclusive" in the sbatch command, it is overridden by the partition's OverSubscribe setting.

I ended up just creating a "reservedq" and a "reservedsharedq".
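For anyone who finds this thread later, that two-partition workaround might look roughly like this in slurm.conf (partition names as above; the node list and the FORCE level are placeholders, not our actual config):

```
# Hypothetical slurm.conf fragment -- node names and limits are illustrative only.
# Whole-node partition: every job gets its nodes exclusively.
PartitionName=reservedq       Nodes=node[01-16] OverSubscribe=EXCLUSIVE
# Shared partition: up to 4 jobs may be packed onto each node.
PartitionName=reservedsharedq Nodes=node[01-16] OverSubscribe=FORCE:4
```

Since the partition's OverSubscribe setting takes precedence over the job's flags, splitting into two partitions sidesteps the problem entirely.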

It would be really great to have an equivalent of "--mem=0" (or "--exclusive") that actually works to request all the resources in a node.

thanks,
dustin

On Tue, Mar 31, 2026 at 6:05 PM Brian Andrus via slurm-users <slurm-users@lists.schedmd.com> wrote:

It sounds like you really want/need oversubscribe to be off.

That being said, perhaps you could do a combination of --exclusive and --mem=0. The --mem=0 should still allocate all the memory (and thus the entire node), and --exclusive should give the job access to all the CPUs/cores/GPUs.

Brian Andrus

On 3/30/2026 8:46 AM, Dustin Lang via slurm-users wrote:
Hi,

No, unfortunately -- "the partition's OverSubscribe option takes precedence over the job's option"

Thanks for the suggestion, though!
-dustin



On Mon, Mar 30, 2026 at 10:33 AM Guillaume COCHARD <guillaume.cochard@cc.in2p3.fr> wrote:
Hi,

Could --exclusive ( https://slurm.schedmd.com/sbatch.html#OPT_exclusive ) do the trick?

Guillaume


From: "Dustin Lang via slurm-users" <slurm-users@lists.schedmd.com>
To: "Slurm User Community List" <slurm-users@lists.schedmd.com>
Sent: Monday, March 30, 2026 16:12:11
Subject: [slurm-users] Shared queue: how to request full node with all resources?

Hi,

With a partition with "OverSubscribe=FORCE" set, is there a way to request all the node resources?  I see "--mem=0" does that for memory.  But I do not see an option to request all the CPUs and GRES/TRES such as GPUs.  I tried "--nodes=1 --ntasks=1 --cpus-per-task=0", but "--cpus-per-task=0" does not do the same thing as "--mem=0".

In other words, is it possible to have a queue where nodes can be shared between jobs, but have a simple way for an sbatch script to request the full node with all its memory, cpus, and other resources?  We have a heterogeneous cluster, so telling users (or jupyterhub scripts) to list exactly the resources they want doesn't really work.
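For concreteness, here is what I tried versus what works for memory (command lines only; the script name is a placeholder):

```
# Works: --mem=0 requests all of a node's memory
sbatch --mem=0 job.sh

# Does not behave analogously for CPUs:
# --cpus-per-task=0 is not a "give me all the CPUs" request
sbatch --nodes=1 --ntasks=1 --cpus-per-task=0 job.sh
```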

thanks,
dustin




--
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-leave@lists.schedmd.com

