[slurm-users] Limits to partitions for users groups

Рачко Антон Сергеевич anton at ciam.ru
Thu Feb 6 13:20:53 UTC 2020


Strange. I set up a "limited" QoS with a limit of cpu=560 and applied it to my partition.

[root at head ~]# sacctmgr show qos
      Name   Priority  GraceTime    Preempt PreemptMode                                    Flags UsageThres UsageFactor       GrpTRES   GrpTRESMins GrpTRESRunMin GrpJobs GrpSubmit     GrpWall       MaxTRES MaxTRESPerNode   MaxTRESMins     MaxWall     MaxTRESPU MaxJobsPU MaxSubmitPU     MaxTRESPA MaxJobsPA MaxSubmitPA       MinTRES
---------- ---------- ---------- ---------- ----------- ---------------------------------------- ---------- ----------- ------------- ------------- ------------- ------- --------- ----------- ------------- -------------- ------------- ----------- ------------- --------- ----------- ------------- --------- ----------- -------------
    normal          0   00:00:00                cluster                                                        1.000000                                                                                                                                                                                                  
   limited         10   00:00:00                cluster                                                        1.000000       cpu=560


As a test, I ran hostname as a user, requesting more than 560 CPUs. It ran anyway:

13655          hostname    work    limited       1232  COMPLETED      0:0

[root at head ~]# scontrol show job 13655 | grep cpu
   TRES=cpu=1232,node=22
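
For anyone hitting the same behaviour: a common reason a QoS GrpTRES cap is silently ignored is that limit enforcement is not enabled in slurm.conf. A minimal sketch of the relevant line, assuming slurmdbd accounting is otherwise in place:

# slurm.conf: without "limits" here, GrpTRES and similar caps are not enforced
AccountingStorageEnforce=associations,limits,qos,safe

slurmctld needs a restart for this setting to take effect.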

-----Original Message-----
From: slurm-users <slurm-users-bounces at lists.schedmd.com> On Behalf Of Renfro, Michael
Sent: Wednesday, February 05, 2020 5:53 PM
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: Re: [slurm-users] Limits to partitions for users groups

If you want to rigidly define which 20 nodes are available to the one group of users, you could define a 20-node partition for them, and a 35-node partition for the priority group, and restrict access by Unix group membership:

PartitionName=restricted Nodes=node0[01-20] AllowGroups=ALL
PartitionName=priority Nodes=node0[01-35] AllowGroups=prioritygroup
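
As a quick sanity check (using the illustrative partition and group names above), a member of prioritygroup can submit to either partition, while everyone else is rejected from the priority one:

sbatch --partition=restricted --wrap="hostname"   # accepted for any user
sbatch --partition=priority --wrap="hostname"     # rejected unless the user is in prioritygroup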

If you don’t care which of the 35 nodes get used by the first group, but want to restrict them to using at most 20 nodes of the 35, you could define a single partition and a QOS for each group:

PartitionName=restricted Nodes=node0[01-35] AllowGroups=ALL QoS=restricted
PartitionName=priority Nodes=node0[01-35] AllowGroups=prioritygroup QoS=priority
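
Once the two QOS records exist (see the sacctmgr lines below), you can confirm they were attached as partition QOS with something like:

scontrol show partition restricted | grep QoS
scontrol show partition priority | grep QoS

which should report QoS=restricted and QoS=priority respectively.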

sacctmgr add qos restricted
sacctmgr modify qos restricted set grptres=cpu=N   # where N = 20 * (cores per node)
sacctmgr add qos priority
sacctmgr modify qos priority set grptres=cpu=-1    # might not be strictly required
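
To verify the resulting limits afterwards without the full-width table, sacctmgr can print just the relevant columns:

sacctmgr show qos format=name,priority,grptres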


> On Feb 5, 2020, at 8:07 AM, Рачко Антон Сергеевич <anton at ciam.ru> wrote:
> 
> I have a partition with 35 nodes. Many users use it, but one group of them has higher priority than the others. I want to set a limit of at most 20 nodes for ordinary users and allow users in the priority group to use all nodes.
> I could split this partition into two: a 20-node partition for everyone and a 15-node partition for the priority group. Can I do it another way (sacctmgr, QOS, etc.)?


