Look into the QOS documentation on the sacctmgr man page. A QOS can be defined via sacctmgr, and that QOS can be attached to the partition to allow for more restrictions than the partition definition alone allows.
One of the settings for a QOS is "MaxTRESPerJob", so setting that to "cpu=8" and attaching that QOS to the partition definition SHOULD limit every job on the partition to a maximum of 8 CPUs.
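Something along these lines should do it — a minimal sketch, where the QOS name "debugqos" is my own placeholder, not anything from your config:

```shell
# Create a QOS with an 8-CPU per-job cap (name "debugqos" is hypothetical)
sacctmgr add qos debugqos set MaxTRESPerJob=cpu=8

# Attach it to the partition in slurm.conf, e.g.:
#   PartitionName=debug Nodes=node01 MaxTime=02:00:00 DefMemPerCPU=1000 QOS=debugqos Default=NO

# Then tell the daemons to pick up the change
scontrol reconfigure
```

Note that QOS limits require accounting to be enabled (AccountingStorageType=accounting_storage/slurmdbd) and "qos" to be listed in AccountingStorageEnforce.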
Rob
________________________________
From: Davide DelVento via slurm-users <slurm-users@lists.schedmd.com>
Sent: Wednesday, February 26, 2025 2:15 PM
To: Herbert Fruchtl <herbert.fruchtl@st-andrews.ac.uk>
Cc: slurm-users@lists.schedmd.com
Subject: [slurm-users] Re: Limit CPUs per job (but not per user, partition or node)
Hi Herbert, I believe the limit is per node (not per partition) whereas you want it per job. In other words, your users will be able to run jobs on other nodes.
There is no MaxCPUsPerJob option in the partition definition, but I believe you can make that restriction in other ways (at worst with a job_submit.lua but I think there would be a simpler way).
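If you do end up going the job_submit.lua route, a sketch might look like the following — assuming the partition is named "debug" and a hypothetical 8-CPU cap; field names are from the job_submit/lua plugin interface:

```lua
-- Sketch of a job_submit.lua check; slurmctld calls slurm_job_submit()
-- for every submission before the job is accepted.
function slurm_job_submit(job_desc, part_list, submit_uid)
    if job_desc.partition == "debug" then
        -- min_cpus reflects the job's requested CPU count
        if job_desc.min_cpus ~= nil and job_desc.min_cpus > 8 then
            slurm.log_user("debug partition: jobs are limited to 8 CPUs")
            return slurm.ERROR
        end
    end
    return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
    return slurm.SUCCESS
end
```

But as noted, a QOS attached to the partition is likely the simpler way.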
Sorry, not an answer, but hopefully a little nudge toward the solution.

Davide
On Wed, Feb 26, 2025 at 11:37 AM Herbert Fruchtl via slurm-users <slurm-users@lists.schedmd.com> wrote:

We have a cluster with multi-core nodes (168) that can be shared by multiple jobs at the same time. How do I configure a partition such that it only accepts jobs requesting up to (say) 8 cores, but will run multiple jobs at the same time? The following is apparently not working:
PartitionName=debug Nodes=node01 MaxTime=02:00:00 DefMemPerCPU=1000 MaxCPUsPerNode=8 Default=NO
It allows one job using 8 cores, but a second one will not start because the limit is apparently for the partition as a whole.
Thanks in advance,
Herbert
--
Herbert Fruchtl (he/him)
Senior Scientific Computing Officer / HPC Administrator
School of Chemistry, IT Services
University of St Andrews
--
The University of St Andrews is a charity registered in Scotland: No SC013532
--
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-leave@lists.schedmd.com