[slurm-users] Longer queuing times for larger jobs

David Baker D.J.Baker at soton.ac.uk
Fri Jan 31 16:14:23 UTC 2020


Hello,

Thank you for your reply. In answer to Mike's questions...

Our serial partition partially shares nodes with the high-memory partition. That is, the partitions overlap -- the shared nodes move one way or the other depending upon demand. Jobs requesting up to and including 20 cores are routed to the serial queue. The serial nodes are shared resources; in other words, jobs from different users can share the nodes. The maximum time for serial jobs is 60 hours.
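In slurm.conf terms the layout is roughly as follows (a sketch only -- the node names, ranges and select plugin shown here are illustrative, not our real values):

# Core-level allocation so that jobs from different users can share a node
SelectType=select/cons_res
SelectTypeParameters=CR_Core_Memory

# The serial and high-memory partitions overlap on a pool of shared nodes
PartitionName=serial  Nodes=node[301-350] MaxTime=60:00:00 State=UP
PartitionName=highmem Nodes=node[341-360] State=UP   # node[341-350] belong to both partitions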

Over time there hasn't been any particular change in the run times that users are requesting. Likewise I'm convinced that the overall job size spread is the same over time. What has changed is an increase in the number of smaller jobs: that is, single-node jobs that are exclusive (and so can't be routed to the serial queue) or that require more than 20 cores, and also jobs requesting up to, say, 10-15 nodes. The user base has increased dramatically over the last 6 months or so.

This overpopulation is leading to delays in scheduling the larger jobs. Given the size of the cluster we may need to make decisions regarding which types of jobs we allow to "dominate" the system -- for example, favouring the larger jobs at the expense of the small fry. That is a difficult decision, however, since it means that someone has to wait longer for their results.

Best regards,
David
________________________________
From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of Renfro, Michael <Renfro at tntech.edu>
Sent: 31 January 2020 13:27
To: Slurm User Community List <slurm-users at lists.schedmd.com>
Subject: Re: [slurm-users] Longer queuing times for larger jobs

Greetings, fellow general university resource administrator.

Couple things come to mind from my experience:

1) does your serial partition share nodes with the other non-serial partitions?

2) what’s your maximum job time allowed, for serial (if the previous answer was “yes”) and non-serial partitions? Are your users submitting particularly longer jobs compared to earlier?

3) are you using the backfill scheduler at all?

--
Mike Renfro, PhD  / HPC Systems Administrator, Information Technology Services
931 372-3601      / Tennessee Tech University

On Jan 31, 2020, at 6:23 AM, David Baker <D.J.Baker at soton.ac.uk> wrote:

Hello,

Our SLURM cluster is relatively small. We have 350 standard compute nodes, each with 40 cores. The largest job that users can run on the partition is one requesting 32 nodes. Our cluster is a general university research resource, so there are many different sizes of jobs, ranging from single-core jobs, which get routed to a serial partition via job_submit.lua (a sketch of that routing follows the settings below), through to jobs requesting 32 nodes. When we first started the service, 32-node jobs were typically taking in the region of 2 days to schedule -- recently queuing times have started to get out of hand. Our setup is essentially...

PriorityFavorSmall=NO
FairShareDampeningFactor=5
PriorityFlags=ACCRUE_ALWAYS,FAIR_TREE
PriorityType=priority/multifactor
PriorityDecayHalfLife=7-0

PriorityWeightAge=400000
PriorityWeightPartition=1000
PriorityWeightJobSize=500000
PriorityWeightQOS=1000000
PriorityMaxAge=7-0
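
For reference, the routing mentioned above is done in job_submit.lua along the following lines (a simplified sketch -- the real function also handles unset fields and exclusive requests):

function slurm_job_submit(job_desc, part_list, submit_uid)
   -- route jobs of up to 20 cores to the serial partition
   -- (the check that keeps --exclusive jobs out of serial is omitted here)
   if job_desc.min_cpus ~= nil and job_desc.min_cpus <= 20 then
      job_desc.partition = "serial"
   end
   return slurm.SUCCESS
end

function slurm_job_modify(job_desc, job_rec, part_list, modify_uid)
   return slurm.SUCCESS
end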

To try to reduce the queuing times for our bigger jobs, should we increase the PriorityWeightJobSize factor in the first instance to bump up the priority of such jobs? Or should we define a set of QOSs that we assign to jobs in our job_submit.lua depending on job size? In other words, say, a "large" QOS that gives the largest jobs a higher priority and also limits how many of those jobs a single user can submit?
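
If we went with the QOS route, I imagine something like the following (the QOS name, priority value, node threshold and submit limit are only placeholders):

# define a QOS for the largest jobs, with a higher priority and a per-user submit limit
sacctmgr add qos large
sacctmgr modify qos large set Priority=100 MaxSubmitJobsPerUser=4

-- and in job_submit.lua, alongside the routing above
if job_desc.min_nodes ~= nil and job_desc.min_nodes >= 16 then
   job_desc.qos = "large"
end

The users' associations would also need access to that QOS (e.g. sacctmgr modify user name=<user> set QOS+=large), and its Priority would then feed into the PriorityWeightQOS term above.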

Your advice would be appreciated, please. At the moment these large jobs are not accruing a sufficiently high priority to rise above the other jobs in the cluster.

Best regards,
David