We’ve run a similar setup since I moved to Slurm 3 years ago, with no issues. Could you share partition definitions from your slurm.conf?

When you see a bunch of jobs pending, which ones have a reason of “Resources”? Those should be the next ones to run; ones with a reason of “Priority” are waiting for higher-priority jobs to start (including the ones marked “Resources”). The only time I’ve seen nodes sit idle is when an MPI job is pending with “Resources” and any smaller job, if started, would delay that job’s start.
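
If it helps, this is roughly how I check that from the command line; the format string is just a suggestion:

    # Pending jobs with their priority (%Q) and pending reason (%r):
    squeue --state=PENDING --format="%.18i %.9P %.8u %.10Q %.12r"
    # Per-factor priority breakdown, to see why one job outranks another:
    sprio -l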

--
Mike Renfro, PhD / HPC Systems Administrator, Information Technology Services
931 372-3601 / Tennessee Tech University

On Aug 14, 2020, at 4:20 AM, Erik Eisold <eisold@pks.mpg.de> wrote:

> Our node topology is a bit special: almost all of our nodes are in one
> common partition, a subset of those nodes is also in a second partition,
> and this repeats once more. The only difference between the partitions,
> apart from the nodes in them, is the maximum run time. I originally set
> it up this way to ensure that users with shorter jobs got a quicker
> response time and that the whole cluster wouldn't be clogged up with
> long-running jobs for days on end; I was also new to the whole cluster
> setup and to Slurm itself. I have attached a rough visualization of this
> setup to this mail. There are two more entirely separate partitions that
> are not in this image.
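
If I'm reading that right, the layout is something like the sketch below (node names, counts, and time limits are all invented for illustration):

    # Three overlapping partitions; the smaller the node subset, the
    # longer the allowed run time. All names and numbers hypothetical.
    PartitionName=short  Nodes=node[001-100] MaxTime=04:00:00    Default=YES
    PartitionName=medium Nodes=node[001-060] MaxTime=2-00:00:00
    PartitionName=long   Nodes=node[001-020] MaxTime=14-00:00:00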
>
> My idea for a solution would be to move all the nodes into one common
> partition and use partition QOS to implement the time and resource
> restrictions, because I think the scheduler is not really meant to
> handle the type of setup we chose in the beginning.
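
That approach can work. A minimal sketch of what it might look like, assuming QOS names and limits of my own invention:

    # Job QOSes carrying the old per-partition time limits, with a group
    # TRES cap standing in for the smaller node subsets (values hypothetical):
    sacctmgr add qos short  MaxWall=04:00:00
    sacctmgr add qos medium MaxWall=2-00:00:00  GrpTRES=cpu=1920
    sacctmgr add qos long   MaxWall=14-00:00:00 GrpTRES=cpu=640

    # slurm.conf: one partition over all nodes, restricted to those QOSes:
    PartitionName=all Nodes=node[001-100] Default=YES MaxTime=14-00:00:00 AllowQos=short,medium,long

You'd also want AccountingStorageEnforce to include qos and limits so the QOS restrictions are actually enforced.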