[slurm-users] Weirdness with partitions

Feng Zhang prod.feng at gmail.com
Thu Sep 21 18:41:29 UTC 2023


Reading the pasted slurm.conf info again, I see it includes
"AllowAccounts, AllowGroups", so it seems Slurm actually takes these
into account. So I think it should work...
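
Something like this, as a minimal sketch (the partition, node, and
group names below are just placeholders, not from the pasted config):

  # slurm.conf (sketch): with ANY, submission succeeds if the job
  # satisfies the limits of at least one of the listed partitions
  EnforcePartLimits=ANY
  PartitionName=b1 Nodes=node[01-04] AllowGroups=grp_a State=UP
  PartitionName=b2 Nodes=node[05-08] AllowGroups=grp_a State=UP
  PartitionName=b3 Nodes=node[09-12] AllowGroups=grp_a State=UP
  PartitionName=b4 Nodes=node[13-16] AllowGroups=grp_b State=UP

  # multi-partition submission: if the user is only in grp_a, failing
  # AllowGroups on b4 alone should not reject the job at submit time
  sbatch --partition=b1,b2,b3,b4 job.sh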

Best,

Feng

On Thu, Sep 21, 2023 at 2:33 PM Feng Zhang <prod.feng at gmail.com> wrote:
>
> As I said, I am not sure; it depends on the algorithm and the code
> structure of Slurm (I have had no chance to dig into it...). My guess
> at the way Slurm works is:
>
> Check limits on b1: ok; b2: ok; b3: ok; then b4: not ok... (or in whatever order Slurm checks them)
>
> If it works with EnforcePartLimits=ANY or NO, yeah, it's a surprise...
>
> (This use case might not have been included in the original design of Slurm, I guess.)
>
> "NOTE: The partition limits being considered are its configured
> MaxMemPerCPU, MaxMemPerNode, MinNodes, MaxNodes, MaxTime, AllocNodes,
> AllowAccounts, AllowGroups, AllowQOS, and QOS usage threshold."
>
> Best,
>
> Feng
>
> On Thu, Sep 21, 2023 at 11:48 AM Bernstein, Noam CIV USN NRL (6393)
> Washington DC (USA) <noam.bernstein at nrl.navy.mil> wrote:
> >
> > On Sep 21, 2023, at 11:37 AM, Feng Zhang <prod.feng at gmail.com> wrote:
> >
> > Setting the slurm.conf parameter EnforcePartLimits=ANY or NO may help with this; I'm not sure.
> >
> >
> > Hmm, interesting, but it looks like this is just a check at submission time. The slurm.conf web page doesn't indicate that it affects the actual queuing decision, just whether or not a job that will never run (at all, or just on some of the listed partitions) can be submitted. If it does help, then I think the slurm.conf description is misleading.
> >
> > Noam

