[slurm-users] Larger jobs tend to get starved out on our cluster
Skouson, Gary
gbs35 at psu.edu
Fri Jan 11 09:53:55 MST 2019
You should be able to turn on some backfill debug info from slurmctld. Take a look at the DebugFlags setting, specifically the Backfill and BackfillMap flags.
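Something along these lines should do it (flag names as in the slurm.conf man page; adjust for your version). In slurm.conf:

DebugFlags=Backfill,BackfillMap

followed by an "scontrol reconfigure", or toggled at run time with:

scontrol setdebugflags +Backfill
scontrol setdebugflags +BackfillMap

The backfill decisions then show up in the slurmctld log.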
Your bf_window is set to 3600 minutes, i.e. 2.5 days. If the expected start time of a large job is further out than that, it won't get any nodes reserved.
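As a rough illustration (the numbers are just examples, not a recommendation): if your longest partitions allow, say, 8-day jobs, you'd want something more like

SchedulerParameters=bf_window=11520,bf_resolution=600,bf_max_job_user=4

i.e. bf_window at least as long as the longest allowed walltime (11520 minutes = 8 days), with bf_resolution relaxed a little so the wider window doesn't cost too much scheduler time.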
You may also want to take a look at the bf_window_linear parameter. By default the backfill time window starts at 30 seconds and doubles at each iteration, so a job that needs to wait a couple of days to gather the required resources ends up with a reservation resolution that is more than a day wide. Even if the nodes will actually be free 2 days from now, the "reservation" may be placed 3 days out, allowing 2-day jobs to sneak in ahead of the large job. The result is that small jobs lasting 1-2 days can delay the start of a large job for weeks.
You can turn on bf_window_linear and it'll keep that from happening. Unfortunately, it means that there are more backfill iterations required to search out multiple days into the future. If you have relatively few jobs, that may not matter. If you have lots of jobs, it'll slow things down a bit. You'll have to do some testing to see if that'll work for you.
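If I remember the parameter right, it just gets added to SchedulerParameters with the increment given in seconds, e.g.

SchedulerParameters=bf_window=3600,bf_resolution=180,bf_max_job_user=4,bf_window_linear=300

which would grow the window by a fixed 5 minutes per iteration instead of doubling it. It's a fairly recent addition, so check that your Slurm version supports it.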
-----
Gary Skouson
From: slurm-users <slurm-users-bounces at lists.schedmd.com> On Behalf Of Baker D.J.
Sent: Wednesday, January 9, 2019 11:40 AM
To: slurm-users at lists.schedmd.com
Subject: [slurm-users] Larger jobs tend to get starved out on our cluster
Hello,
A colleague intimated that larger jobs tend to get starved out on our Slurm cluster. It's not a busy time at the moment, so it's difficult to test this properly. Back in November it was not completely unusual for a larger job to have to wait up to a week to start.
I've extracted the key scheduling configuration from our slurm.conf, and I would appreciate your comments, please. Even at the busiest of times we notice many single-node compute jobs executing on the cluster, started either by the main scheduler or by backfill.
Looking at the scheduling configuration, do you think that I'm favouring small jobs too much? That is, for example, should I increase PriorityWeightJobSize to encourage larger jobs to run?
I was very keen not to starve out small/medium jobs; however, perhaps there is too much emphasis on them in our setup.
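(As a sanity check, I assume something like sprio would show how the weights actually play out for pending jobs, e.g.

sprio -w
sprio -l

with the first showing the configured weights and the second the per-factor contribution for each pending job -- though I'd welcome confirmation that this is the right place to look.)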
My colleague comes from a Moab background, and so he was surprised not to see nodes being reserved for jobs. It could be that Slurm simply works differently, trying to make efficient use of the cluster by backfilling more aggressively than Moab; certainly we see a great deal of backfill activity.
In this respect, does anyone understand the mechanism used to reserve nodes/resources for jobs in Slurm, or know where to look for that type of information?
Best regards,
David
SchedulerType=sched/backfill
SchedulerParameters=bf_window=3600,bf_resolution=180,bf_max_job_user=4
SelectType=select/cons_res
SelectTypeParameters=CR_Core
FastSchedule=1
PriorityFavorSmall=NO
PriorityFlags=DEPTH_OBLIVIOUS,SMALL_RELATIVE_TO_TIME,FAIR_TREE
PriorityType=priority/multifactor
PriorityDecayHalfLife=14-0
PriorityWeightFairshare=1000000
PriorityWeightAge=100000
PriorityWeightPartition=0
PriorityWeightJobSize=100000
PriorityWeightQOS=10000
PriorityMaxAge=7-0