[slurm-users] multifactor priority calculation

Williams, Gareth (IM&T, Black Mountain) Gareth.Williams at csiro.au
Mon Jun 13 23:09:23 UTC 2022


Perhaps run 'sprio -l' and 'sprio -lw' to get more insight into the current priority calculation for pending jobs.
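
If you want both views in one place, here is a minimal sketch (assuming sprio and squeue are in PATH, and that squeue's '%Q' format field prints the final priority):

    import subprocess

    # Per-factor priority breakdown (sprio -l), the same listing with the
    # configured weights added (sprio -lw), and the final priorities as
    # squeue reports them ('%i %Q' = jobid, priority).
    for cmd in (["sprio", "-l"], ["sprio", "-lw"],
                ["squeue", "-h", "-o", "%i %Q"]):
        print("$ " + " ".join(cmd))
        print(subprocess.run(cmd, capture_output=True,
                             text=True, check=True).stdout)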

Gareth

-----Original Message-----
From: slurm-users <slurm-users-bounces at lists.schedmd.com> On Behalf Of z148x at arcor.de
Sent: Tuesday, 14 June 2022 6:09 AM
To: slurm-users at lists.schedmd.com
Subject: Re: [slurm-users] multifactor priority calculation


Hello Lyn,
only the priority settings I gave as an example are in the Slurm config.

Maybe I have found the missing piece.
It looks like the priority (for some jobs?) in the Slurm (19.05.5) database is not updated. I retrieve these values from slurmdbd via pyslurm.

This would be a problem for my purposes; the priority values from squeue seem to be correct.
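
For reference, a minimal sketch of how the two sources could be diffed (it assumes pyslurm's old-style job().get() API and its 'priority' field for the live controller view; the slurmdbd values I mentioned could be compared the same way):

    import subprocess
    import pyslurm

    # Live job priorities as slurmctld reports them through pyslurm
    # (assumption: job().get() returns {jobid: info dict} with 'priority').
    ctld = {str(jid): str(info.get("priority"))
            for jid, info in pyslurm.job().get().items()}

    # Final priorities as squeue reports them ('%i %Q' = jobid, priority).
    out = subprocess.run(["squeue", "-h", "-o", "%i %Q"],
                         capture_output=True, text=True, check=True)
    for line in out.stdout.splitlines():
        jid, prio = line.split()
        if ctld.get(jid) != prio:
            print(f"job {jid}: squeue={prio} pyslurm={ctld.get(jid)}")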

Is this a bug?
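
For what it's worth: with identical cpu and mem requests, the TRES part of the multifactor sum should be identical. A back-of-the-envelope sketch, assuming each TRES factor is the job's request divided by the partition total, capped at 1.0 (the partition totals here are made-up placeholders, not values from this thread):

    # Hypothetical partition totals -- placeholders only.
    PARTITION_TOTALS = {"cpu": 256, "mem": 1024 * 1024}  # mem in MB
    WEIGHTS = {"cpu": 1000, "mem": 2000}  # from PriorityWeightTRES

    def tres_points(request):
        # Assumed model: weight * min(request / partition_total, 1.0),
        # summed over the configured TRES.
        return sum(w * min(request[t] / PARTITION_TOTALS[t], 1.0)
                   for t, w in WEIGHTS.items())

    # The cpu/mem requests from the jobs quoted below.
    for req in ({"cpu": 1, "mem": 1024},
                {"cpu": 14, "mem": 123904},
                {"cpu": 28, "mem": 251904}):
        print(req, round(tres_points(req)))

The last two jobs in the quoted list request the same cpu and mem, so under this model they get identical TRES points; any difference in their final priority would have to come from another factor (job size, age, fairshare, partition, or QOS).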

Regards,
Mike

On 13.06.22 20:50, Lyn Gerner wrote:
> Mike, it feels like there may be other PriorityWeight terms that are 
> non-zero in your config. QoS or partition-related, perhaps?
> 
> Regards,
> Lyn
> 
> On Mon, Jun 13, 2022 at 5:55 AM <z148x at arcor.de> wrote:
> 
>>
>> Dear all,
>>
>> I noticed different priority calculations while running a pipeline; the
>> relevant settings are, for example:
>>
>> PriorityType=priority/multifactor
>> PriorityWeightJobSize=100000
>> AccountingStorageTRES=cpu,mem,gres/gpu
>> PriorityWeightTRES=cpu=1000,mem=2000,gres/gpu=3000
>>
>> No age factor or any other factor from the plugin is set.
>>
>>
>> The calculated priorities for the given memory and CPU requests:
>> mem 1024, cpus 1, priority 25932
>> mem 123904, cpus 14, priority 38499
>> mem 251904, cpus 28, priority 20652
>> mem 251904, cpus 28, priority 14739
>>
>>
>> No gres/gpu was used by the jobs/instances.
>>
>>
>> Does anyone know why the priority changed with the same cpu and mem input?
>>
>> With these settings the priority should scale with the request: highest
>> priority for mem 251904 with cpus 28 and lowest priority for mem 1024 with cpus 1.
>>
>>
>> Many thanks,
>>
>> Mike
>>
>>
> 


