<div dir="ltr"><div>The "JobId=2210784 delayed for accounting policy is likely the key as it indicates the job is currently unable to run, so the lower priority smaller job bumps ahead of it.</div><div>You have not provided enough information (cluster configuration, job information, etc) to diagnose what accounting policy is being violated. Like you, I suspect that this is happening due to power management and powered-down nodes (I am not experienced with sending jobs to the cloud) --- what is the policy for starting the powered down nodes? I can also see issues due to the delay in starting the powered down nodes; the scheduler starts looking at 2210784, are not enough idle, running nodes to launch it, maybe it tells some nodes to spin up, but by time they spin up it already assigned the previously up and idle nodes to the smaller job.<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Feb 27, 2019 at 3:33 PM Michael Gutteridge <<a href="mailto:michael.gutteridge@gmail.com">michael.gutteridge@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div class="gmail_default" style="font-family:monospace">I've run into a problem with a cluster we've got in a cloud provider- hoping someone might have some advice.</div><div class="gmail_default" style="font-family:monospace"><br></div><div class="gmail_default" style="font-family:monospace">The problem is that I've got a circumstance where large jobs _never_ start... or more correctly, that large-er jobs don't start when there are many smaller jobs in the partition. In this cluster, accounts are limited to 300 cores. One user has submitted a couple thousand jobs that each use 6 cores. These queue up, start nodes, and eventually all 300 cores in the limit are busy and the remaining jobs are held with "AssocGrpCpuLimit". All as expected.</div><div class="gmail_default" style="font-family:monospace"><br></div><div class="gmail_default" style="font-family:monospace">Then another user submits a job requesting 16 cores. This one, too, gets held with the same reason. However, that larger job never starts even if it has the highest priority of jobs in this account (I've set it manually and by using nice).</div><div class="gmail_default" style="font-family:monospace"><br></div><div class="gmail_default" style="font-family:monospace">What I see in the sched.log is:</div><div class="gmail_default" style="font-family:monospace"><br></div><div class="gmail_default"><div class="gmail_default" style="font-family:monospace">sched: [2019-02-25T16:00:14.940] Running job scheduler</div><div class="gmail_default" style="font-family:monospace">sched: [2019-02-25T16:00:14.941] JobId=2210784 delayed for accounting policy</div><div class="gmail_default" style="font-family:monospace">sched: [2019-02-25T16:00:14.942] JobId=2203130 initiated</div><div class="gmail_default" style="font-family:monospace">sched: [2019-02-25T16:00:14.942] Allocate JobId=2203130 NodeList=node1 #CPUs=6 Partition=largenode</div><div style="font-family:monospace"><br></div><div style="font-family:monospace">In this case, 2210784 is the job requesting 16 cores and 2203130 is one of the six core jobs. This seems to happen with either the backfill or builtin scheduler. 
On Wed, Feb 27, 2019 at 3:33 PM Michael Gutteridge <michael.gutteridge@gmail.com> wrote:

> I've run into a problem with a cluster we've got in a cloud provider;
> I'm hoping someone might have some advice.
>
> The problem is that I've got a circumstance where large jobs _never_
> start... or, more correctly, larger jobs don't start when there are many
> smaller jobs in the partition. In this cluster, accounts are limited to
> 300 cores. One user has submitted a couple thousand jobs that each use 6
> cores. These queue up, start nodes, and eventually all 300 cores in the
> limit are busy and the remaining jobs are held with "AssocGrpCpuLimit".
> All as expected.
>
> Then another user submits a job requesting 16 cores. This one, too, gets
> held with the same reason. However, that larger job never starts, even
> if it has the highest priority of any job in this account (I've set it
> manually and by using nice).
>
> What I see in sched.log is:
>
>     sched: [2019-02-25T16:00:14.940] Running job scheduler
>     sched: [2019-02-25T16:00:14.941] JobId=2210784 delayed for accounting policy
>     sched: [2019-02-25T16:00:14.942] JobId=2203130 initiated
>     sched: [2019-02-25T16:00:14.942] Allocate JobId=2203130 NodeList=node1 #CPUs=6 Partition=largenode
>
> In this case, 2210784 is the job requesting 16 cores and 2203130 is one
> of the six-core jobs. This seems to happen with either the backfill or
> the builtin scheduler. I suspect what's happening is that when one of
> the smaller jobs completes, the scheduler first looks at the
> higher-priority large job, determines that it cannot run because of the
> constraint, looks at the next job in the list, determines that it can
> run without exceeding the limit, and then starts that job. In this way,
> the larger job isn't started until all of the smaller jobs complete.
>
> I thought that switching to the builtin scheduler would fix this, but as
> slurm.conf(5) indicates:
>
> > An exception is made for jobs that can not run due to partition
> > constraints (e.g. the time limit) or down/drained nodes. In that
> > case, lower priority jobs can be initiated and not impact the
> > higher priority job.
>
> I suspect one of these exceptions is being triggered. The limit is in
> the job's association, so I don't think it's a partition constraint. I
> don't have this problem with the on-premises cluster, so I suspect it's
> something to do with power management and the state of powered-down
> nodes.
>
> I've sort-of worked around this by setting a per-user limit lower than
> the per-account limit, but that doesn't address the situation where a
> single user submits both large and small jobs, and it leads to other
> problems for other groups, so it's not a long-term solution.
>
> Thanks for having a look
>
>  - Michael
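One more thought after re-reading your message, with the caveat that I have not tried this with power saving: slurm.conf(5) documents an assoc_limit_stop option for SchedulerParameters which, as I read it, tells the scheduler to stop rather than step past a job that is blocked on association limits, which sounds like exactly the skip-ahead you describe. A sketch of what I would test (keep whatever you already have in SchedulerParameters; the timeout values are placeholders):

    # slurm.conf (excerpt) - untested sketch, not a recommendation
    # Do not initiate lower priority jobs past one blocked on association limits
    SchedulerParameters=bf_continue,assoc_limit_stop
    # Give powered-down cloud nodes more time to come up before the
    # scheduler gives up on them (placeholder values)
    ResumeTimeout=600
    SuspendTimeout=120

If the larger job still gets passed over with assoc_limit_stop set, that would point back at the power-management/down-node exception rather than the accounting limit.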
-- 
Tom Payerle
DIT-ACIGS/Mid-Atlantic Crossroads    payerle@umd.edu
5825 University Research Park        (301) 405-6135
University of Maryland
College Park, MD 20740-3831