[slurm-users] Question about having 2 partitions that are mutually exclusive, but have unexpected interactions

David Henkemeyer david.henkemeyer at gmail.com
Thu May 12 18:31:37 UTC 2022


Thanks Brian.  We have MaxJobCount set to 100k, which has really improved our
performance on the A partition.  We queue up 50k+ jobs nightly and see
really good node utilization, so jobs deep in the queue are being considered.
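
For reference, the relevant slurm.conf lines look roughly like this (an
illustrative excerpt, not our exact file; note that MaxJobCount only takes
effect after a slurmctld restart):

    # slurm.conf (excerpt) -- values shown are examples
    MaxJobCount=100000   # active jobs slurmctld will track at once
    MinJobAge=300        # seconds a finished job is kept before purging (default)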

It could be that the scheduler is kept so busy with other work that it takes
a while to figure out that the B jobs, despite being lower priority, can run
on partition B, where nothing of higher priority is targeted.

My wish here would be to tell the controller to spawn a separate scheduling
thread, with one thread focused only on the B partition while the other
handles the rest.  Or something similar.
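
In case it helps anyone else hitting this, the SchedulerParameters depth
limits look like the relevant knobs (a sketch only; we have not validated
these values):

    # slurm.conf (excerpt) -- candidate settings, not yet validated
    SchedulerParameters=default_queue_depth=1000,partition_job_depth=500,bf_max_job_part=500,bf_continue
    # default_queue_depth - jobs the main scheduler examines per cycle
    # partition_job_depth - cap on jobs examined per partition each cycle,
    #                       so one deep partition can't use up the whole pass
    # bf_max_job_part     - the equivalent per-partition cap for backfill
    # bf_continue         - let backfill resume where it left off after
    #                       releasing locks

partition_job_depth in particular looks like the closest thing to a
per-partition thread: it should keep the A backlog from consuming the whole
scheduling pass before B is ever examined.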

David

On Thu, May 12, 2022 at 9:13 AM Brian Andrus <toomuchit at gmail.com> wrote:

> I suspect you have too low of a setting for "MaxJobCount"
>
> *MaxJobCount*
>               The maximum number of jobs SLURM can have in its active database
>               at one time. Set the values of *MaxJobCount* and *MinJobAge* to
>               insure the slurmctld daemon does not exhaust its memory or other
>               resources. Once this limit is reached, requests to submit
>               additional jobs will fail. The default value is 5000 jobs. This
>               value may not be reset via "scontrol reconfig". It only takes
>               effect upon restart of the slurmctld daemon. May not exceed
>               65533.
>
>
> So if you already have (by default) 5000 jobs being considered, the
> remaining jobs aren't even looked at.
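>
> A quick way to check where things stand (a sketch; adjust the patterns as
> needed):
>
>     scontrol show config | grep -i -E 'MaxJobCount|MinJobAge'
>     sdiag | grep -i -E 'pending|cycle'   # scheduler stats: pending counts, cycle times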
>
> Brian Andrus
> On 5/12/2022 7:34 AM, David Henkemeyer wrote:
>
> Question for the braintrust:
>
> I have 3 partitions:
>
>    - Partition A_highpri: 80 nodes
>    - Partition A_lowpri: same 80 nodes
>    - Partition B_lowpri: 10 different nodes
>
>
> There is no overlap between A and B partitions.
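>
> Roughly, the partition definitions look like this (node ranges and priority
> values below are placeholders, not our real config):
>
>     PartitionName=A_highpri Nodes=nodeA[01-80] PriorityJobFactor=10 State=UP
>     PartitionName=A_lowpri  Nodes=nodeA[01-80] PriorityJobFactor=1  State=UP
>     PartitionName=B_lowpri  Nodes=nodeB[01-10] PriorityJobFactor=1  State=UP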
>
> Here is what I'm observing.  If I fill the queue with ~20-30k jobs for
> partition A_highpri and several thousand for partition A_lowpri, and then a
> bit later submit jobs to partition B_lowpri, the Partition B jobs *are
> queued rather than running right away, with a pending reason of
> "Priority"*, which doesn't seem right to me.  Yes, there are higher
> priority jobs pending in the queue (the jobs bound for A_highpri), but there
> aren't any higher priority jobs pending *for the same partition* as the
> Partition B jobs, so theoretically these partition B jobs should not be
> held up.  Eventually the scheduler does get around to scheduling them, but
> it seems to take a while for the scheduler (which is probably quite busy
> handling job starts, job stops, etc.) to figure this out.
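>
> For reference, something like the following shows the symptom (output
> columns are just an example):
>
>     squeue -p B_lowpri -t PD -o "%.12i %.10P %.8T %r"
>     # shows the B_lowpri jobs sitting in PENDING with Reason=Priority,
>     # even though no higher-priority job targets that partition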
>
> If I submit fewer jobs to the A partitions (~3k), the scheduler starts the
> Partition B jobs much faster, as expected.  As I increase beyond 3k, the
> partition B jobs get held up longer and longer.
>
> I can raise the priority on partition B, and that does solve the problem,
> but I don't want those jobs to impact the partition A_lowpri jobs.  In
> fact, *I don't want any cross-partition influence*.
>
> I'm hoping there is a slurm parameter I can tweak to make slurm recognize
> that these partition B jobs should never have a pending reason of
> "Priority".  Or to treat these as 2 separate queues.  Or something like
> that.  Spinning up a 2nd slurm controller is not ideal for us (unless there
> is a lightweight way to do it).
>
> Thanks
> David
>
>
>