[slurm-users] QOS time limit tighter than partition limit

Ross Dickson ross.dickson at ace-net.ca
Fri Dec 17 21:08:02 UTC 2021


Thanks for the suggestions, Samuel.  It turns out the root of the problem
was elsewhere: although I had updated slurm.conf with
'AccountingStorageEnforce = associations,limits,qos', and 'scontrol show
config' reported the new value, I had neglected to restart slurmctld, so the
setting *wasn't* actually in effect.  If you're listening, SchedMD: having
'scontrol show config' report a value that isn't actually in effect is, IMO,
a bug.  But also, silly me for not reading the docs and the log files more
carefully.
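
In case it helps anyone else who lands here, the fix amounted to restarting
the controller and then re-checking that the QOS limit really is enforced.
A rough sketch (assuming a systemd-managed slurmctld; the test job below is
just illustrative):

  $ sudo systemctl restart slurmctld
  $ scontrol show config | grep AccountingStorageEnforce
  AccountingStorageEnforce = associations,limits,qos

  # submitted as user1, whose default QOS is 'nonpaying'; a request longer
  # than the QOS MaxWall should now be rejected at submit time or left
  # pending with a QOS limit reason, depending on the QOS flags
  # (e.g. DenyOnLimit)
  $ sbatch --time=7-0:0:0 --wrap='sleep 60'
  $ squeue -u user1 -o '%i %T %r'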

Cheers all!
Ross



>> On Thu, Dec 16, 2021 at 6:01 PM Ross Dickson <ross.dickson at ace-net.ca>
>> wrote:
>>
>>> I would like to impose a time limit stricter than the partition limit
>>> on a certain subset of users.  I should be able to do this with a QOS, but
>>> I can't get it to work.  ...
>>>
>>
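
For the archives, the setup I was describing amounts to roughly the
following (the QOS and user names are just the ones from my test, and the
exact sacctmgr syntax may differ slightly between versions):

  # create the QOS and cap its wall time at one day
  $ sacctmgr add qos nonpaying
  $ sacctmgr modify qos where name=nonpaying set MaxWall=1-00:00:00
  # attach it to user1's association and make it their default
  $ sacctmgr modify user where name=user1 set qos=nonpaying defaultqos=nonpaying
  # partition 'general' keeps its 7-day limit in slurm.conf, e.g.
  #   PartitionName=general Default=YES MaxTime=7-00:00:00 ...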

>>> I've created a QOS 'nonpaying' with MaxWall=1-0:0:0, and set
>>> MaxTime=7-0:0:0 on partition 'general'.  I set the association on user1 so
>>> that their job will get QOS 'nonpaying', then submit a job with
>>> --time=7-0:0:0, and it runs:
>>>
>>> $ scontrol show partition general | egrep 'QoS|MaxTime'
>>>    AllocNodes=ALL Default=YES QoS=N/A
>>>    MaxNodes=UNLIMITED MaxTime=7-00:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED
>>> $ sacctmgr show qos nonpaying format=name,flags,maxwall
>>>       Name                Flags     MaxWall
>>> ---------- -------------------- -----------
>>>  nonpaying                       1-00:00:00
>>> $ scontrol show job 33 | egrep 'QOS|JobState|TimeLimit'
>>>    Priority=4294901728 Nice=0 Account=acad1 QOS=nonpaying
>>>    JobState=RUNNING Reason=None Dependency=(null)
>>>    RunTime=00:00:40 TimeLimit=7-00:00:00 TimeMin=N/A
>>> $ scontrol show config | grep AccountingStorageEnforce
>>> AccountingStorageEnforce = associations,limits,qos
>>>
>>>
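
Now that the limits are actually being enforced, the association and QOS
are easy to double-check with something along these lines (the jobid is a
placeholder):

  $ sacctmgr show assoc where user=user1 format=cluster,account,user,qos,defaultqos
  $ sacctmgr show qos nonpaying format=name,flags,maxwall
  $ scontrol show job <jobid> | egrep 'QOS|JobState|Reason|TimeLimit'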