[slurm-users] setting default resource limits with sacctmgr dump/load ?

Grigory Shamov Grigory.Shamov at umanitoba.ca
Thu Jan 23 23:44:35 UTC 2020


Hi All,

I have tried to use a script that manages Slurm accounts and users
via sacctmgr dump flat files. I am using Slurm 19.05.4 on CentOS 7.6.
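
The workflow is basically dump, edit the flat file, load it back; roughly
(cluster and file names here are just placeholders):

sacctmgr dump mycluster file=mycluster.cfg
sacctmgr load mycluster.cfg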

Our accounting scheme is rather flat: there is one level of accounting
groups, and users belong to those groups. It looks like with sacctmgr
dump / load the user 'root' is created automatically, and its accounting
group 'root' is created implicitly. The dump does not explicitly list
the account 'root'; the top of the dump looks like this:

Cluster - 'my cluster':FairShare=1:QOS='normal':GrpTRES=cpu=400
Parent - 'root'
User - 'root':AdminLevel='Administrator':DefaultAccount='root':FairShare=1
Account - 'aaa-qqq':FairShare=1:GrpTRES=cpu=666
Account - 'bbb-rrr':FairShare=1

(more accounts to follow)

(then users to follow)

It mostly works, but I ran into an issue when I tried to specify
default limits via the first line of the dump, "Cluster - 'my
cluster':FairShare=1:QOS='normal':GrpTRES=cpu=400".

The documentation says: "Anything included on this line will be the
defaults for all associations on this cluster. These options are as
follows…". I read "for all" as "for each and every association that
does not have the option explicitly specified". On that reading, the
account 'bbb-rrr' should get cpu=400 by default, while 'aaa-qqq' gets
cpu=666 because it is set explicitly.
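
In other words, I expected the Cluster line to be equivalent to writing
the default out on every association that does not override it, roughly
like this (a hypothetical expansion for illustration, not actual dump
output):

Account - 'aaa-qqq':FairShare=1:GrpTRES=cpu=666
Account - 'bbb-rrr':FairShare=1:GrpTRES=cpu=400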

However, I have found that at least the GrpTRES limit gets set on the
implicit 'root' association as well. And because the 'root' association
is the parent of each and every accounting group, the supposedly default
per-association GrpTRES=cpu limit becomes a total limit for the entire
cluster, so that the users of 'aaa-qqq' and 'bbb-rrr' combined cannot
exceed cpu=400. That is far more restrictive than I expected.
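
For reference, this is how I inspected the resulting associations after
the load (tree and format are ordinary sacctmgr options):

sacctmgr show assoc tree format=Account,User,GrpTRES

and the cpu=400 limit indeed shows up on the 'root' association at the
top of the tree.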

Is this a bug or a feature? Is there a way to distinguish cluster-wide
total limits from default per-accounting-group limits in the sacctmgr
dump flat-file syntax?
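
(A workaround I have considered, but not fully tested: keep
GrpTRES=cpu=400 on the Cluster line, load the dump, and then clear the
limit from the 'root' association afterwards, since setting a TRES value
to -1 is supposed to remove it:

sacctmgr modify account where name=root set GrpTRES=cpu=-1

But I would prefer to express this in the dump file itself.)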

Thanks!  

--
Grigory Shamov
University of Manitoba



