[slurm-users] Priority access for a group of users

Henkel henkel at uni-mainz.de
Mon Feb 18 09:52:05 UTC 2019


Hi Marcus,

sure, using PriorityTier is fine. And my point wasn't so much about
preemption but rather about using just one partition and no
preemption instead of two partitions, which is what David was asking
for, wasn't it? But actually, I forgot that you can also do it with a
single partition and preemption by using preempt/qos, though we
haven't used that ourselves.
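
In case it is useful, a rough, untested sketch of that preempt/qos
variant with a single partition (the QOS and user names are just
placeholders):

    # in slurm.conf:
    PreemptType=preempt/qos
    PreemptMode=REQUEUE

    # via sacctmgr: an owner QOS that may preempt jobs running
    # under the normal QOS
    sacctmgr add qos owner
    sacctmgr modify qos owner set Priority=10000 Preempt=normal
    sacctmgr modify user alice set qos+=owner

The owners would then submit with --qos=owner, while everyone else
keeps the default QOS.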

Best,

Andreas

On 2/18/19 9:07 AM, Marcus Wagner wrote:
> Hi Andreas,
>
>
> doesn't it suffice to use priority tier partitions? You don't need to
> use preemption at all, do you?
>
>
> Best
> Marcus
>
> On 2/18/19 8:27 AM, Henkel, Andreas wrote:
>> Hi David,
>>
>> I think there is another option if you don’t want to use preemption.
>> If the maximum run limit is small (several hours, for example), working
>> without preemption may be acceptable.
>> Assign a QOS with a priority boost to the owners of the nodes. Then,
>> whenever they submit jobs to the partition, their jobs go to the top of
>> the queue.
>> This only works if there is one dedicated partition for those nodes,
>> which is of course accessible to all users. A rough sketch follows below.
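>>
>> Roughly (an untested sketch; the QOS and user names are just
>> placeholders), the boost could be set up like this:
>>
>>     sacctmgr add qos owner_boost
>>     sacctmgr modify qos owner_boost set Priority=100000
>>     sacctmgr modify user alice set qos+=owner_boost
>>
>> The owners then submit with --qos=owner_boost. For the boost to have
>> any effect, the QOS factor must carry weight, i.e. PriorityWeightQOS
>> in slurm.conf must be non-zero with priority/multifactor.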
>>
>> Best,
>> Andreas 
>>
>> On 15.02.2019 at 18:08, david baker <djbaker12 at gmail.com
>> <mailto:djbaker12 at gmail.com>> wrote:
>>
>>> Hi Paul, Marcus,
>>>
>>> Thank you for your replies. Using partition priority makes
>>> sense. I was thinking of doing something similar with a set of nodes
>>> purchased by another group. That is, having a private high-priority
>>> partition and a lower-priority "scavenger" partition for the public.
>>> In that case scavenger jobs will get killed when preempted.
>>>
>>> In the present case, I did wonder if it would be possible to do
>>> something with just a single partition -- hence my question. Your
>>> replies have convinced me that two partitions will work -- with
>>> preemption leading to re-queued jobs.
>>>
>>> Best regards,
>>> David 
>>>
>>> On Fri, Feb 15, 2019 at 3:09 PM Paul Edmon <pedmon at cfa.harvard.edu
>>> <mailto:pedmon at cfa.harvard.edu>> wrote:
>>>
>>>     Yup, PriorityTier is what we use to do exactly that here. That
>>>     said, unless you turn on preemption, jobs may still pend if there
>>>     is no space. We run with REQUEUE on, which has worked well.
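>>>
>>>     Roughly what that combination looks like in slurm.conf (an
>>>     untested sketch; the node, partition, and account names are just
>>>     placeholders):
>>>
>>>         PreemptType=preempt/partition_prio
>>>         PreemptMode=REQUEUE
>>>
>>>         PartitionName=owner     Nodes=node[01-04] PriorityTier=10 AllowAccounts=ownergrp
>>>         PartitionName=scavenger Nodes=node[01-04] PriorityTier=1
>>>
>>>     Scavenger jobs have to be requeueable (submitted with --requeue,
>>>     or JobRequeue=1 set as the default) for REQUEUE to take effect.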
>>>
>>>
>>>     -Paul Edmon-
>>>
>>>
>>>     On 2/15/19 7:19 AM, Marcus Wagner wrote:
>>>>     Hi David,
>>>>
>>>>     as far as I know, you can use the PriorityTier (partition
>>>>     parameter) to achieve this. According to the manpages (if I
>>>>     remember right) jobs from higher priority tier partitions have
>>>>     precedence over jobs from lower priority tier partitions,
>>>>     without taking the normal fairshare priority into consideration.
>>>>
>>>>     Best
>>>>     Marcus
>>>>
>>>>     On 2/15/19 10:07 AM, David Baker wrote:
>>>>>
>>>>>     Hello.
>>>>>
>>>>>
>>>>>     We have a small set of compute nodes owned by a group. The
>>>>>     group has agreed that the rest of the HPC community can use
>>>>>     these nodes providing that they (the owners) can always have
>>>>>     priority access to the nodes. The four nodes are well
>>>>>     provisioned (1 TByte memory each plus 2 GRID K2 graphics
>>>>>     cards) and so there is no need to worry about preemption. In
>>>>>     fact I'm happy for the nodes to be used as well as possible by
>>>>>     all users. It's just that jobs from the owners must take
>>>>>     priority if resources are scarce.  
>>>>>
>>>>>
>>>>>     What is the best way to achieve the above in slurm? I'm
>>>>>     planning to place the nodes in their own partition. The node
>>>>>     owners will have priority access to the nodes in that
>>>>>     partition, but will have no advantage when submitting jobs to
>>>>>     the public resources. Does anyone please have any ideas how to
>>>>>     deal with this?
>>>>>
>>>>>
>>>>>     Best regards,
>>>>>
>>>>>     David
>>>>>
>>>>>
>>>>
>>>>     -- 
>>>>     Marcus Wagner, Dipl.-Inf.
>>>>
>>>>     IT Center
>>>>     Abteilung: Systeme und Betrieb
>>>>     RWTH Aachen University
>>>>     Seffenter Weg 23
>>>>     52074 Aachen
>>>>     Tel: +49 241 80-24383
>>>>     Fax: +49 241 80-624383
>>>>     wagner at itc.rwth-aachen.de <mailto:wagner at itc.rwth-aachen.de>
>>>>     www.itc.rwth-aachen.de <http://www.itc.rwth-aachen.de>
>>>
>
> -- 
> Marcus Wagner, Dipl.-Inf.
>
> IT Center
> Abteilung: Systeme und Betrieb
> RWTH Aachen University
> Seffenter Weg 23
> 52074 Aachen
> Tel: +49 241 80-24383
> Fax: +49 241 80-624383
> wagner at itc.rwth-aachen.de
> www.itc.rwth-aachen.de

-- 
Dr. Andreas Henkel
Operativer Leiter HPC
Zentrum für Datenverarbeitung
Johannes Gutenberg Universität
Anselm-Franz-von-Bentzelweg 12
55099 Mainz
Telefon: +49 6131 39 26434
OpenPGP Fingerprint: FEC6 287B EFF3
7998 A141 03BA E2A9 089F 2D8E F37E
