[slurm-users] Increasing job priority based on resources requested.

Prentice Bisbal pbisbal at pppl.gov
Mon Apr 22 15:02:14 UTC 2019


That's essentially what I'm doing now, and I'm trying to get away from 
that approach. Because of all the heterogeneity in my environment, the 
list of conditionals is a mile long, which makes every little tweak of 
my environment quite cumbersome and increases the risk of a human error 
(typo, logic error) breaking things.

--
Prentice


On 4/21/19 3:19 AM, Pawel R. Dziekonski wrote:
> Hi,
>
> you can always come up with some kind of submit "filter" that would
> assign constraints to jobs based on requested memory. In this way you
> can force smaller-memory jobs to go only to low-memory nodes and keep
> the large-memory nodes free from trash jobs (a sketch of such a filter
> follows below).
>
> The disadvantage is that the large-memory nodes would sit idle if only
> low-memory jobs are in the queue.
>
> cheers,
> P
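A minimal sketch of that kind of filter, written here as a hypothetical site-local wrapper around sbatch rather than as a real job_submit plugin. The wrapper itself, the 256 GB threshold, and the node features "bigmem"/"lowmem" are assumptions for illustration only, not anything defined in this thread:

#!/usr/bin/env python3
"""Hypothetical sbatch wrapper: steer jobs to node classes based on requested memory.

Assumes the nodes advertise made-up features "bigmem" and "lowmem" in slurm.conf.
"""
import re
import subprocess
import sys

BIGMEM_THRESHOLD_MB = 256 * 1024   # assumed site policy: >= 256 GB counts as "big memory"


def requested_mem_mb(args):
    """Return the --mem request in MB, or None if the job did not set one."""
    for arg in args:
        m = re.match(r"--mem=(\d+)([KMGT]?)$", arg)
        if m:
            factor = {"": 1, "K": 1 / 1024, "M": 1, "G": 1024, "T": 1024 * 1024}[m.group(2)]
            return int(int(m.group(1)) * factor)
    return None


def main():
    args = sys.argv[1:]
    mem = requested_mem_mb(args)
    # Only add a constraint when the user requested memory and set no constraint themselves.
    if mem is not None and not any(a.startswith("--constraint") for a in args):
        feature = "bigmem" if mem >= BIGMEM_THRESHOLD_MB else "lowmem"
        args = ["--constraint=" + feature] + args
    sys.exit(subprocess.call(["sbatch"] + args))


if __name__ == "__main__":
    main()

The same decision could live in a job_submit/lua plugin on the slurmctld side; a wrapper is simply easier to iterate on without touching the controller.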
>
>>> ----- Original Message -----
>>> From: "Prentice Bisbal" <pbisbal at pppl.gov>
>>> To: slurm-users at lists.schedmd.com
>>> Sent: Friday, April 19, 2019 11:27:08 AM
>>> Subject: Re: [slurm-users] Increasing job priority based on resources requested.
>>>
>>> Ryan,
>>>
>>> I certainly understand your point of view, but yes, this is definitely
>>> what I want. We only have a few large-memory nodes, so we want jobs that
>>> request a lot of memory to have higher priority, so they get assigned to
>>> those large-memory nodes ahead of lower-memory jobs that could run
>>> anywhere else. But we don't want those nodes to sit idle if the only
>>> jobs in the queue don't need that much memory. Similar idea for IB:
>>> jobs that need IB should get priority on the IB nodes over jobs that
>>> don't.
>>>
>>> Ideally, I wouldn't have such a heterogeneous environment, and then this
>>> wouldn't be needed at all.
>>>
>>> I agree this opens another avenue for unscrupulous users to game the
>>> system, but that (in theory) can be policed by looking at memory
>>> requested vs. memory used in the accounting data to identify any abusers
>>> and then giving them a stern talking-to (a sketch of such a check
>>> follows below).
>>>
>>> Prentice
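A rough sketch of that requested-vs-used audit, using sacct fields that do exist (JobID, User, ReqMem, MaxRSS); the parsing is deliberately simplistic and the 4x over-request ratio is an arbitrary assumed threshold, not a recommendation:

#!/usr/bin/env python3
"""Hypothetical audit: flag completed jobs that requested far more memory than they used."""
import subprocess

OVERREQUEST_RATIO = 4.0   # assumed threshold: asked for more than 4x what was actually touched


def to_mb(value):
    """Convert a Slurm size string such as '12500M', '3G' or '4000Mn' to megabytes."""
    value = value.strip().rstrip("nc")   # older ReqMem output appends a per-node/per-core suffix
    if not value or not value[0].isdigit():
        return None
    units = {"K": 1 / 1024, "M": 1, "G": 1024, "T": 1024 * 1024}
    if value[-1] in units:
        return float(value[:-1]) * units[value[-1]]
    return float(value)


out = subprocess.run(
    ["sacct", "--parsable2", "--noheader", "--units=M", "--state=COMPLETED",
     "--format=JobID,User,ReqMem,MaxRSS"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    jobid, user, reqmem, maxrss = line.split("|")
    req, used = to_mb(reqmem), to_mb(maxrss)
    if req and used and req / used > OVERREQUEST_RATIO:
        print(f"{jobid} ({user or 'step'}): requested {req:.0f} MB, peak RSS {used:.0f} MB")

MaxRSS is populated per step, so in practice the interesting lines are the .batch steps; the point is only that the data needed for the "stern talking to" is already in the accounting records.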
>>>
>>>
>>> On 4/18/19 5:27 PM, Ryan Novosielski wrote:
>>>> This is not an official answer really, but I’ve always just considered this to be the way that the scheduler works. It wants to get work completed, so it will have a bias toward doing what is possible vs. not (can’t use 239GB of RAM on a 128GB node). And really, is a higher priority what you want? I’m not so sure. How soon will someone figure out that they might get a higher priority based on requesting some feature they don’t need?
>>>>
>>>> --
>>>> ____
>>>> || \\UTGERS,  	 |---------------------------*O*---------------------------
>>>> ||_// the State	 |         Ryan Novosielski - novosirj at rutgers.edu
>>>> || \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
>>>> ||  \\    of NJ	 | Office of Advanced Research Computing - MSB C630, Newark
>>>>          `'
>>>>
>>>>> On Apr 18, 2019, at 5:20 PM, Prentice Bisbal <pbisbal at pppl.gov> wrote:
>>>>>
>>>>> Slurm-users,
>>>>>
>>>>> Is there a way to increase a job's priority based on the resources or constraints it has requested?
>>>>>
>>>>> For example, we have a very heterogeneous cluster here: Some nodes only have 1 Gb Ethernet, some have 10 Gb Ethernet, and others have DDR IB. In addition, we have some large memory nodes with RAM amounts ranging from 128 GB up to 512 GB. To allow a user to request IB, I have implemented that as a feature in the node definition so users can request that as a constraint.
>>>>>
>>>>> I would like to make it so that if a job requests IB, its priority goes up, or if it requests a lot of memory (specifically memory-per-cpu), its priority goes up proportionally to the amount of memory requested. Is this possible? If so, how?
>>>>>
>>>>> I have tried going through the documentation and googling, but 'priority' comes up so often in discussions of general job priority that I couldn't find any search results relevant to this.
>>>>>
>>>>> -- 
>>>>> Prentice
>>>>>
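One way a site could approximate the proportional boost being asked about, sketched here as another hypothetical wrapper rather than an official Slurm mechanism (the scaling constants are invented for the example): sbatch's --nice flag accepts a non-negative adjustment without special privileges, so instead of raising the priority of high-memory jobs the wrapper lowers it less for them.

#!/usr/bin/env python3
"""Hypothetical wrapper: let requested memory-per-cpu influence queue order via --nice."""
import re
import subprocess
import sys

MAX_NICE = 1000              # assumed ceiling for the priority penalty
MAX_MEM_PER_CPU_MB = 16384   # assumed: requests of 16 GB/CPU or more get no penalty at all


def mem_per_cpu_mb(args):
    """Return the --mem-per-cpu request in MB, or 0 if the job did not set one."""
    for arg in args:
        m = re.match(r"--mem-per-cpu=(\d+)([KMGT]?)$", arg)
        if m:
            factor = {"": 1, "K": 1 / 1024, "M": 1, "G": 1024, "T": 1024 * 1024}[m.group(2)]
            return int(int(m.group(1)) * factor)
    return 0


args = sys.argv[1:]
if not any(a.startswith("--nice") for a in args):
    mem = min(mem_per_cpu_mb(args), MAX_MEM_PER_CPU_MB)
    nice = int(MAX_NICE * (1 - mem / MAX_MEM_PER_CPU_MB))   # bigger request -> smaller penalty
    args = ["--nice=%d" % nice] + args
sys.exit(subprocess.call(["sbatch"] + args))

If the Slurm version in use supports it, the PriorityWeightTRES setting in slurm.conf may also be worth a look, since it can weight memory directly inside the multifactor priority calculation.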
>>>>>
>>


