[slurm-users] QOS MaxJobsPU limit is not working

Lyn Gerner schedulerqueen at gmail.com
Thu Jul 26 15:04:26 MDT 2018


Hi,

Have you enforced other limits successfully? What is the value of
AccountingStorageEnforce?
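
For QOS limits like MaxJobsPU to be honored, slurm.conf needs limit
enforcement turned on; a minimal sketch (the exact option list here is an
assumption, shown only to illustrate) would be:

    # slurm.conf (sketch): enable enforcement of association and QOS limits
    AccountingStorageEnforce=associations,limits,qos
    # restart/reconfigure slurmctld after changing this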

Regards,
Lyn

On Thu, Jul 26, 2018 at 1:45 PM, Siddharth Dalmia <dalmia.sid at gmail.com>
wrote:

>
> Hi all,
>
> We wanted to try making 2 different QOSes (priority and normal). For the
> priority QOS: 1) each user is only allowed 1 job; 2) it has higher priority
> than normal, with the ability to preempt. For the normal QOS: 1) there is
> no restriction on the number of jobs. Overall, jobs run with
> `priority/multifactor`. I am not able to restrict the number of jobs that
> a user can run with a particular QOS, even after setting MaxJobsPU to 1
> for priority. srun --qos=priority --pty bash lets me start as many jobs as
> I want. I am sorry if this is redundant, but I am having a lot of trouble
> making this work and any help is appreciated. Below is some information
> about my conf and slurmdbd which might be useful. Thanks in advance!
>
> ---------------
> My slurm.conf looks like:
>
> NodeName=islpc[30-39] CPUs=2 State=UNKNOWN Gres=gpu:1080Ti:2
> PartitionName=standard Nodes=islpc[30-39] Default=YES MaxTime=2-0 State=UP
> GresTypes=gpu
> PriorityType=priority/multifactor
> PriorityWeightAge=1000
> PriorityWeightFairshare=10000
> PriorityWeightJobSize=1000
> PriorityWeightPartition=1000
> PriorityWeightQOS=100000
>
> ---------------
> sacctmgr show qos format=Name,Priority,MaxJobsPU
>
>       Name   Priority MaxJobsPU
> ---------- ---------- ---------
>     normal          0
>   priority         10         1
>
> ---------------
> sacctmgr show assoc format=Cluster,Account,User,QOS
>
>    Cluster    Account       User                  QOS
> ---------- ---------- ---------- --------------------
> islpc-clu+       root                 priority,normal
> islpc-clu+       root       root      priority,normal
> islpc-clu+      islpc                 priority,normal
> islpc-clu+      islpc    sdalmia      priority,normal
>
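
As a sanity check, the per-user cap can be (re)applied and confirmed with
sacctmgr along these lines (a sketch; the QOS name is taken from the output
quoted above, and the exact invocation is an assumption of standard sacctmgr
usage):

    # (re)set the per-user job limit on the "priority" QOS
    sacctmgr modify qos where name=priority set MaxJobsPerUser=1
    # confirm the limit is stored
    sacctmgr show qos format=Name,Priority,MaxJobsPU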