[slurm-users] scheduling issue

Renfro, Michael Renfro at tntech.edu
Fri Aug 14 12:18:00 UTC 2020


We’ve run a similar setup since I moved to Slurm 3 years ago, with no issues. Could you share partition definitions from your slurm.conf?

When you see a bunch of jobs pending, which ones have a reason of “Resources”? Those should be the next ones to run, and ones with a reason of “Priority” are waiting for higher-priority jobs to start (including the ones marked “Resources”). The only time I’ve seen nodes sit idle is when an MPI job is pending with “Resources”: the scheduler holds nodes free for it, and starting any smaller job on them would delay that job’s start.
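If it helps, something like the following (the exact squeue format string is just one way to slice it) lists the pending jobs with their reasons, and scontrol can dump the partition definitions without digging through slurm.conf:

    # Pending jobs with partition, user, requested node count, and pending reason
    squeue --state=PENDING --format="%.10i %.12P %.10u %.6D %.20R"

    # Current partition definitions as the controller sees them
    scontrol show partition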

--
Mike Renfro, PhD  / HPC Systems Administrator, Information Technology Services
931 372-3601      / Tennessee Tech University

On Aug 14, 2020, at 4:20 AM, Erik Eisold <eisold at pks.mpg.de> wrote:

Our node topology is a bit special: almost all our nodes are in one
common partition, a subset of those nodes is then in another partition,
and this repeats once more. The only difference between the partitions,
apart from the nodes in them, is the maximum run time. The reason I
originally set it up this way was to ensure that users with shorter jobs
had a quicker response time and that the whole cluster wouldn't be clogged
up with long-running jobs for days on end; I was also new to the whole
cluster setup and to Slurm itself. I have attached a rough visualization of
this setup to this mail. There are two more entirely separate partitions
that are not in this image.
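Very roughly, the layout looks something like the following in slurm.conf; the node ranges and time limits here are only placeholders for illustration:

    # Illustrative sketch only -- real node ranges and limits differ
    PartitionName=short  Nodes=node[001-100] MaxTime=08:00:00    Default=YES
    PartitionName=medium Nodes=node[001-070] MaxTime=3-00:00:00
    PartitionName=long   Nodes=node[001-040] MaxTime=14-00:00:00

So the partition with the longest run time is restricted to the smallest subset of nodes, while short jobs can run almost anywhere.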

My idea for a solution would be to move all nodes into one common
partition and use a partition QOS to implement the time and resource
restrictions, because I don't think the scheduler is really meant to handle
the kind of setup we chose in the beginning.
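One way this could look, with QOS names and limits as placeholders only, is a single partition over all nodes plus one QOS per time class that carries the old per-partition limits:

    # Create QOSes that take over the old per-partition time/resource limits
    sacctmgr add qos short
    sacctmgr modify qos short set MaxWall=08:00:00
    sacctmgr add qos long
    sacctmgr modify qos long set MaxWall=14-00:00:00 GrpTRES=cpu=512

    # slurm.conf: one partition covering everything, restricted to those QOSes
    PartitionName=main Nodes=node[001-100] Default=YES AllowQos=short,long

Jobs would then pick their time class with something like "sbatch --qos=long job.sh" instead of choosing a partition.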

