[slurm-users] Longer queuing times for larger jobs

Renfro, Michael Renfro at tntech.edu
Fri Jan 31 17:23:05 UTC 2020

I missed reading what size your cluster was at first, but found it on a second read. Our cluster and typical maximum job size scale in about the same way, though (our users' typical job size is anywhere from a few cores up to 10% of our core count).

There are several recommendations to separate your priority weights by an order of magnitude or so. Our weights are dominated by fairshare, and we effectively ignore all other factors.
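As a rough sketch of why that separation matters (this is not Slurm's actual implementation; the real multifactor plugin normalizes each factor internally and has more inputs), priority is a weighted sum of factors in [0, 1], so a weight an order of magnitude above the rest dominates the result:

```python
# Hypothetical sketch of Slurm's multifactor priority sum.
# Each factor is normalized to [0.0, 1.0]; the weights scale them.
def job_priority(factors, weights):
    """Weighted sum of normalized priority factors."""
    return sum(weights[name] * factors.get(name, 0.0) for name in weights)

# Illustrative weights: fairshare two orders of magnitude above the rest.
weights = {"fairshare": 100000, "age": 1000, "jobsize": 1000, "qos": 1000}

# A heavy user (low fairshare factor) vs. a light user (high fairshare factor):
heavy = job_priority({"fairshare": 0.1, "age": 1.0, "jobsize": 1.0, "qos": 1.0}, weights)
light = job_priority({"fairshare": 0.9, "age": 0.0, "jobsize": 0.1, "qos": 0.0}, weights)

# With fairshare weighted 100x higher, the light user's job wins even though
# the heavy user's job has maximal age, size, and QOS factors.
print(heavy, light)  # 13000.0 90100.0
```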

We also put TRES limits on by default, so that users can’t queue-stuff beyond a certain limit (any jobs totaling under around 1 cluster-day can be in a running or queued state, and anything past that is ignored until their running jobs burn off some of their time). This allows other users’ jobs to have a chance to run if resources are available, even if they were submitted well after the heavy users’ blocked jobs.
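For reference, that kind of cap can be expressed through sacctmgr's GrpTRESRunMins limit, which counts the remaining CPU-minutes of running jobs against an account's quota. The account name and figure below are placeholders, not our actual settings: one "cluster-day" is total core count times 1440 minutes, so a hypothetical 1000-core cluster would get cpu=1440000.

```shell
# Placeholder example: limit an account's outstanding CPU-minutes
# (running jobs' remaining time) to roughly one cluster-day.
sacctmgr modify account some_account set GrpTRESRunMins=cpu=1440000
```

Jobs submitted past the limit simply sit pending until the user's running jobs burn off enough time, which is the "queue-stuffing" protection described above.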

We also make extensive use of the backfill scheduler to run small, short jobs earlier than their queue time might allow, if and only if they don’t delay other jobs. If a particularly large job is about to run, we can see the nodes gradually empty out, which opens up lots of capacity for very short jobs.
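A minimal slurm.conf sketch for enabling backfill follows; the parameter values are illustrative only, and the right bf_window in particular depends on your maximum walltime:

```
SchedulerType=sched/backfill
# Look a few days into the future, test a bounded number of pending jobs,
# and keep scanning the queue across scheduling iterations.
SchedulerParameters=bf_continue,bf_window=4320,bf_resolution=300,bf_max_job_test=500
```

The key property is that backfilled jobs may start out of order only if doing so does not delay the expected start time of any higher-priority job.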

Overall, our average wait times since September 2017 haven’t exceeded 90 hours for any job size, and I’m pretty sure a *lot* of that wait is due to a few heavy users submitting large numbers of jobs far beyond the TRES limit. Even our jobs of 5-10% cluster size have average start times of 60 hours or less (and we've managed under 48 hours for those size jobs for all but 2 months of that period), but those larger jobs tend to be run by our lighter users, and they get a major improvement to their queue time due to being far below their fairshare target.

We’ve been running at >50% capacity since May 2018, >60% since December 2018, and >80% since February 2019. So our wait times aren’t due to having a ton of spare capacity for extended periods of time.

Not sure how much of that will help immediately, but it may give you some ideas.

> On Jan 31, 2020, at 10:14 AM, David Baker <D.J.Baker at soton.ac.uk> wrote:
> Hello,
> Thank you for your reply. In answer to Mike's questions...
> Our serial partition nodes are partially shared with the high memory partition. That is, the partitions overlap, and the shared nodes move one way or the other depending upon demand. Jobs requesting up to and including 20 cores are routed to the serial queue. The serial nodes are shared resources; in other words, jobs from different users can share the nodes. The maximum time for serial jobs is 60 hours.
> Over time there hasn't been any particular change in the walltimes that users are requesting. Likewise, I'm convinced that the overall job size spread is the same over time. What has changed is the increase in the number of smaller jobs. That is, one-node jobs that are exclusive (can't be routed to the serial queue) or that require more than 20 cores, and also jobs requesting up to 10/15 nodes (let's say). The user base has increased dramatically over the last 6 months or so.
> This overpopulation is leading to the delay in scheduling the larger jobs. Given the size of the cluster, we may need to make decisions regarding which types of jobs we allow to "dominate" the system: the larger jobs at the expense of the small fry, for example. However, that is a difficult decision; it means someone has got to wait longer for their results.
> Best regards,
> David
> From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Renfro, Michael <Renfro at tntech.edu>
> Sent: 31 January 2020 13:27
> To: Slurm User Community List <slurm-users at lists.schedmd.com>
> Subject: Re: [slurm-users] Longer queuing times for larger jobs
> Greetings, fellow general university resource administrator.
> Couple things come to mind from my experience:
> 1) does your serial partition share nodes with the other non-serial partitions?
> 2) what’s your maximum job time allowed, for serial (if the previous answer was “yes”) and non-serial partitions? Are your users submitting particularly longer jobs compared to earlier?
> 3) are you using the backfill scheduler at all?
> --
> Mike Renfro, PhD  / HPC Systems Administrator, Information Technology Services
> 931 372-3601      / Tennessee Tech University
>> On Jan 31, 2020, at 6:23 AM, David Baker <D.J.Baker at soton.ac.uk> wrote:
>> Hello,
>> Our SLURM cluster is relatively small. We have 350 standard compute nodes each with 40 cores. The largest job that users can run on the partition is one requesting 32 nodes. Our cluster is a general university research resource and so there are many different sizes of jobs, ranging from single core jobs, which get routed to a serial partition via the job_submit.lua, through to jobs requesting 32 nodes. When we first started the service, 32 node jobs were typically taking in the region of 2 days to schedule -- recently queuing times have started to get out of hand. Our setup is essentially...
>> PriorityFavorSmall=NO
>> FairShareDampeningFactor=5
>> PriorityType=priority/multifactor
>> PriorityDecayHalfLife=7-0
>> PriorityWeightAge=400000
>> PriorityWeightPartition=1000
>> PriorityWeightJobSize=500000
>> PriorityWeightQOS=1000000
>> PriorityMaxAge=7-0
>> To try to reduce the queuing times for our bigger jobs, should we potentially increase the PriorityWeightJobSize factor in the first instance to bump up the priority of such jobs? Or should we potentially define a set of QOSs which we assign to jobs in our job_submit.lua depending on the size of the job? In other words, let's say there is a "large" QOS that gives the largest jobs a higher priority, and also limits how many of those jobs a single user can submit.
>> Your advice would be appreciated, please. At the moment these large jobs are not accruing a sufficiently high priority to rise above the other jobs in the cluster.
>> Best regards,
>> David 
