[slurm-users] Large job starvation on cloud cluster

Thomas M. Payerle payerle at umd.edu
Wed Feb 27 20:52:21 UTC 2019

The "JobId=2210784 delayed for accounting policy is likely the key as it
indicates the job is currently unable to run, so the lower priority smaller
job bumps ahead of it.
You have not provided enough information (cluster configuration, job
information, etc) to diagnose what accounting policy is being violated.
Like you, I suspect that this is happening due to power management and
powered-down nodes (I am not experienced with sending jobs to the cloud)
--- what is the policy for starting the powered down nodes?  I can also see
issues due to the delay in starting the powered down nodes; the scheduler
starts looking at 2210784, are not enough idle, running nodes to launch it,
maybe it tells some nodes to spin up, but by time they spin up it already
assigned the previously up and idle nodes to the smaller job.
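
In case it helps narrow things down, the knobs I would look at first are below.
This is only a sketch from memory of slurm.conf(5) --- the option names are the
standard ones, but the values and program paths are made up, so check them
against the man page for your Slurm version:

  # Do not let lower-priority jobs jump ahead of a job that is blocked
  # only by an association limit (e.g. a GrpTRES cpu cap):
  SchedulerParameters=assoc_limit_stop

  # Power-save settings controlling when idle nodes are powered down and
  # how long Slurm waits for a powered-down node to boot; that boot delay
  # is the window in which the still-idle nodes can be handed to the
  # smaller jobs:
  SuspendTime=600
  SuspendProgram=/usr/local/sbin/node_suspend.sh   # hypothetical path
  ResumeProgram=/usr/local/sbin/node_resume.sh     # hypothetical path
  ResumeTimeout=900

You can dump what is actually in effect with "scontrol show config" --- the
SchedulerParameters and power-save values all show up there.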

On Wed, Feb 27, 2019 at 3:33 PM Michael Gutteridge <
michael.gutteridge at gmail.com> wrote:

> I've run into a problem with a cluster we've got at a cloud provider -
> hoping someone might have some advice.
> The problem is that I've got a circumstance where large jobs _never_
> start... or, more correctly, that larger jobs don't start when there are
> many smaller jobs in the partition.  In this cluster, accounts are limited
> to 300 cores.  One user has submitted a couple thousand jobs that each use
> 6 cores.  These queue up, start nodes, and eventually all 300 cores in the
> limit are busy and the remaining jobs are held with "AssocGrpCpuLimit".
> All as expected.
> Then another user submits a job requesting 16 cores.  This one, too, gets
> held with the same reason.  However, that larger job never starts even if
> it has the highest priority of jobs in this account (I've set it manually
> and by using nice).
> What I see in the sched.log is:
> sched: [2019-02-25T16:00:14.940] Running job scheduler
> sched: [2019-02-25T16:00:14.941] JobId=2210784 delayed for accounting
> policy
> sched: [2019-02-25T16:00:14.942] JobId=2203130 initiated
> sched: [2019-02-25T16:00:14.942] Allocate JobId=2203130 NodeList=node1
> #CPUs=6 Partition=largenode
> In this case, 2210784 is the job requesting 16 cores and 2203130 is one of
> the six-core jobs.  This seems to happen with either the backfill or
> builtin scheduler.  I suspect what's happening is that when one of the
> smaller jobs completes, the scheduler first looks at the higher-priority
> large job, determines that it cannot run because of the constraint, looks
> at the next job in the list, determines that it can run without exceeding
> the limit, and then starts that job.  In this way, the larger job isn't
> started until all of these smaller jobs complete.
> I thought that switching to the builtin scheduler would fix this, but as
> slurm.conf(5) indicates:
> > An exception is made for jobs that can not run due
> > to partition constraints (e.g. the time limit) or
> > down/drained nodes.  In that case, lower priority
> > jobs can be initiated and not impact the higher
> > priority job.
> I suspect one of these exceptions is being triggered - the limit is in the
> job's association, so I don't think it's a partition constraint.  I don't
> have this problem with the on-premises cluster, so I suspect it's something
> to do with power management and the state of powered-down nodes.
> I've sort of worked around this by setting a per-user limit lower than the
> per-account limit, but that doesn't address the situation where a single
> user submits both large and small jobs, and it causes other problems for
> other groups, so it's not a long-term solution.
> Thanks for having a look
>  - Michael
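
Regarding the 300-core account cap that produces AssocGrpCpuLimit: that is
exactly the kind of limit the "delayed for accounting policy" message refers
to.  It normally lives on the account's association; roughly (the account name
"mylab" is made up and the syntax is from memory, so verify against
sacctmgr(1)):

  # show the group limits currently on the account's associations
  sacctmgr show assoc where account=mylab format=account,user,grptres

  # how a 300-core account-wide cpu cap is typically set
  sacctmgr modify account mylab set GrpTRES=cpu=300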
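
On the per-user workaround mentioned at the end: one way to express "each user
may hold fewer cores than the account as a whole" without editing every user
association is a QOS limit, roughly (the QOS name and the value are only
illustrative):

  sacctmgr modify qos normal set MaxTRESPerUser=cpu=150

Same idea as the workaround you describe, and with the same caveat: it does
nothing when a single user is mixing large and small jobs.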

Tom Payerle
DIT-ACIGS/Mid-Atlantic Crossroads        payerle at umd.edu
5825 University Research Park               (301) 405-6135
University of Maryland
College Park, MD 20740-3831