[slurm-users] Priority access for a group of users

Paul Edmon pedmon at cfa.harvard.edu
Fri Feb 15 15:05:11 UTC 2019


Yup, PriorityTier is what we use to do exactly that here. That said,
unless you turn on preemption, jobs may still pend if there is no space.
We run with PreemptMode=REQUEUE, which has worked well.
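As a rough sketch, the relevant slurm.conf pieces look something like the
following (node, partition, and group names are made up for illustration,
and the exact preemption settings will depend on your site):

    # Preempt based on partition PriorityTier; preempted jobs are requeued
    PreemptType=preempt/partition_prio
    PreemptMode=REQUEUE

    # The same four nodes sit in two overlapping partitions: the owners'
    # partition has the higher PriorityTier, so its jobs are scheduled
    # first and can preempt (requeue) jobs running in the public partition.
    PartitionName=owners Nodes=node[01-04] AllowGroups=ownergroup PriorityTier=10
    PartitionName=public Nodes=node[01-04] PriorityTier=1 PreemptMode=REQUEUE

Jobs submitted to the public partition should be requeue-safe
(restartable), since they can be bumped whenever the owners need the
nodes.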


-Paul Edmon-


On 2/15/19 7:19 AM, Marcus Wagner wrote:
> Hi David,
>
> As far as I know, you can use PriorityTier (a partition parameter)
> to achieve this. According to the man pages (if I remember right),
> jobs from partitions with a higher PriorityTier take precedence over
> jobs from partitions with a lower PriorityTier, without taking the
> normal fairshare priority into consideration.
>
> Best
> Marcus
>
> On 2/15/19 10:07 AM, David Baker wrote:
>>
>> Hello.
>>
>>
>> We have a small set of compute nodes owned by a group. The group has
>> agreed that the rest of the HPC community can use these nodes,
>> provided that they (the owners) always have priority access to the
>> nodes. The four nodes are well provisioned (1 TB of memory each plus
>> two GRID K2 graphics cards), so there is no need to worry about
>> preemption. In fact I'm happy for the nodes to be used as heavily as
>> possible by all users; it's just that jobs from the owners must take
>> priority if resources are scarce.
>>
>>
>> What is the best way to achieve the above in Slurm? I'm planning to
>> place the nodes in their own partition. The node owners will have
>> priority access to the nodes in that partition, but no advantage when
>> submitting jobs to the public resources. Does anyone have any ideas
>> on how to deal with this?
>>
>>
>> Best regards,
>>
>> David
>>
>>
>
> -- 
> Marcus Wagner, Dipl.-Inf.
>
> IT Center
> Abteilung: Systeme und Betrieb
> RWTH Aachen University
> Seffenter Weg 23
> 52074 Aachen
> Tel: +49 241 80-24383
> Fax: +49 241 80-624383
> wagner at itc.rwth-aachen.de
> www.itc.rwth-aachen.de