[slurm-users] Weirdness with partitions
diego.zuccato at unibo.it
Fri Sep 22 04:23:23 UTC 2023
Thanks. It seems EnforcePartLimits=ANY is what I need:
If set to "ANY", a job must satisfy any of the requested partitions' limits to be submitted.
It was probably changed by whoever reinstalled the cluster, and I didn't notice.
And Slurm was doing what it's been told to do. As usual :)
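For reference, a minimal sketch of the setting discussed here together with a multi-partition submission (the partition names b1..b4 are taken from the example below in the thread and are otherwise hypothetical):

```shell
# slurm.conf: accept a multi-partition job if it satisfies the limits
# of at least one of the requested partitions (the default, "ALL",
# requires the job to satisfy every listed partition's limits)
EnforcePartLimits=ANY

# Submit the same job to several partitions at once; Slurm schedules
# it on whichever listed partition can run it first
sbatch --partition=b1,b2,b3,b4 job.sh
```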
On 21/09/2023 at 20:41, Feng Zhang wrote:
> Reading the pasted slurm.conf info again, it includes
> "AllowAccounts, AllowGroups", so it seems Slurm actually takes these
> into account. So I think it should work...
> On Thu, Sep 21, 2023 at 2:33 PM Feng Zhang <prod.feng at gmail.com> wrote:
>> As I said, I'm not sure; it depends on the algorithm and the code
>> structure of Slurm (no chance to dig into it...). My guess at the
>> way Slurm works:
>> check limits on b1: ok; b2: ok; b3: ok; then b4: not ok... (or in whatever order Slurm chooses)
>> If it works with EnforcePartLimits=ANY or NO, yeah, it's a surprise...
>> (This use case might not have been part of Slurm's original design, I guess.)
>> "NOTE: The partition limits being considered are its configured
>> MaxMemPerCPU, MaxMemPerNode, MinNodes, MaxNodes, MaxTime, AllocNodes,
>> AllowAccounts, AllowGroups, AllowQOS, and QOS usage threshold."
>> On Thu, Sep 21, 2023 at 11:48 AM Bernstein, Noam CIV USN NRL (6393)
>> Washington DC (USA) <noam.bernstein at nrl.navy.mil> wrote:
>>> On Sep 21, 2023, at 11:37 AM, Feng Zhang <prod.feng at gmail.com> wrote:
>>> Set slurm.conf parameter: EnforcePartLimits=ANY or NO may help this, not sure.
>>> Hmm, interesting, but it looks like this is just a check at submission time. The slurm.conf web page doesn't indicate that it affects the actual queuing decision, just whether or not a job that will never run (at all, or just on some of the listed partitions) can be submitted. If it does help then I think that the slurm.conf description is misleading.
DIFA - Dip. di Fisica e Astronomia
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786