[slurm-users] 2 topics: segregating patron accounting, and FIFO in a multifactor setup
Williams, Jenny Avis
jennyw at email.unc.edu
Wed Sep 2 14:40:14 UTC 2020
Hi all -
There are some cases where our researchers wish for behavior different from our main cluster's configuration on their own set of machines.
For the first request: our cluster is set to use PriorityType=priority/multifactor, but some researchers would like a set of resources that behaves in a "FIFO" way, with relatively few, short jobs submitted and running.
One way I can think of to create that behavior is to use federated clusters and set PriorityType=priority/basic on the second cluster. Are there more subtle ways to set up a partition within the original cluster that would behave as a FIFO subset of resources in a multifactor environment?
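For concreteness, a minimal sketch of the federated approach I am imagining is below. The cluster and federation names (fifo, unc_fed) are placeholders, and this assumes a second slurmctld/slurmdbd association already exists:

```shell
# On the second cluster, slurm.conf selects the FIFO scheduler policy
# (priority/basic assigns priority strictly by submit time):
#
#   ClusterName=fifo
#   PriorityType=priority/basic
#
# Then join both clusters into a federation so users can target either:
sacctmgr add federation unc_fed clusters=maincluster,fifo
```

Jobs submitted with `sbatch --cluster=fifo` would then be ordered first-in, first-out, while the main cluster keeps its multifactor configuration.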
Another requested behavior in the target environment is that the time a given user spends in various patron-owned node sets (e.g. partitions or clusters) not count against their fairshare accounting in the main, generally shared partitions.
I do know that setting up a distinct account, so that the user/account combination differs per partition, is one way to do this, since fairshare accumulates per Slurm account/user tuple. I have tested a combination of account/user/partition to see whether that fairshare would be distinct, but apparently it is not, which is a shame. The context here is patron partitions as compared to the general set of shared partitions. Patrons are understandably requesting that they not be penalized in the general pool for the time they use in partitions of nodes they bought. If we had a federated cluster, would a user/account pair on clusterA have separate and distinct sshare values from the same user/account combination on clusterB? Or is the fairshare data merged as an aggregate across the clusters, similar to how jobIDs are?
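To illustrate the distinct-account workaround mentioned above, here is a rough sketch. The account, user, and partition names (patron_acct, alice, patron) are hypothetical, and the partition line assumes the patron nodes are already defined in slurm.conf:

```shell
# Create a separate account for patron usage and associate the user with it:
sacctmgr add account patron_acct Description="patron-owned nodes"
sacctmgr add user alice Account=patron_acct

# In slurm.conf, restrict the patron partition to that account so usage
# accrues under the patron_acct/user tuple rather than the general one:
#
#   PartitionName=patron Nodes=... AllowAccounts=patron_acct

# Fairshare can then be inspected per account:
sshare -A patron_acct
```

Since fairshare is tracked per account/user association, jobs run under patron_acct should not drag down the user's standing under their general account, at the cost of users having to remember `--account=patron_acct` (or a job_submit plugin setting it for them).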
Is there another way to segregate patrons' accounting on their own hosts from their time in the shared partitions?
Thanks for pondering this with me -
UNC Chapel Hill