[slurm-users] Advice on setting up fairshare

David Baker D.J.Baker@soton.ac.uk
Thu Jun 6 16:05:14 UTC 2019


Hello,


Could someone please give me some advice on setting up fairshare in a cluster? I don't think the present setup is wildly incorrect, but either my understanding of the setup is wrong or something is misconfigured.


When we set up a new user on the cluster and they haven't yet used any resources, am I correct in thinking that their fairshare (as reported by sshare -a) should be 1.0? Looking at a new user, I see...


[root@blue52 slurm]# sshare -a | grep rk1n15
  soton                  rk1n15          1    0.003135           0      0.000000   0.822165
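
(As an aside, sshare can also filter on the account and user directly, which keeps the column headers in view; the names here are simply the ones from the grep above:)

sshare -A soton -u rk1n15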


This is a very simple setup. We have a number of groups (all under root)...


soton - general public

hydrology, relgroup and worldpop - specific groups that have purchased their own nodes


What I do for each of these groups, when a new user is added, is increment the number of shares for the relevant group using, for example...


sacctmgr modify account soton set fairshare=X


Where X is the number of users in the group (soton in this case).
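
In case a sketch helps, that step could be scripted roughly like this. It assumes a single cluster and one association per user under the account, and counts the user associations via sacctmgr's parsable output:

# Count the user associations under account "soton"; the account-level
# association has an empty User field, so only non-empty lines count.
X=$(sacctmgr -nP show assoc account=soton format=user | grep -c .)

# Apply that count as the account's shares (-i commits without prompting).
sacctmgr -i modify account soton set fairshare=$X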


The sshare -a command would give me a global overview...


             Account       User  RawShares  NormShares    RawUsage  EffectvUsage  FairShare
-------------------- ---------- ---------- ----------- ----------- ------------- ----------
root                                          0.000000 15431286261      1.000000
 root                      root          1    0.002755          40      0.000000   1.000000
 hydrology                               3    0.008264     1357382      0.000088
  hydrology              da1g18          1    0.333333           0      0.000000   0.876289
....


Does that all make sense, or am I missing something? I am, by the way, using
PriorityFlags=ACCRUE_ALWAYS,FAIR_TREE in my slurm.conf.
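
(For reference, those flags only take effect under the multifactor priority plugin; the relevant excerpt might look roughly like the following, where everything apart from the PriorityFlags line is a placeholder value rather than our actual configuration:)

PriorityType=priority/multifactor
PriorityFlags=ACCRUE_ALWAYS,FAIR_TREE
# Placeholder values, for illustration only:
PriorityDecayHalfLife=7-0
PriorityWeightFairshare=100000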


Best regards,

David

