[slurm-users] Holding back jobs over QOS limit

Jochheim, Florian florian.jochheim at mpibpc.mpg.de
Wed Aug 21 12:13:25 UTC 2019

Hi Folks,


We have a simple, small Slurm cluster set up to facilitate fair usage of
the computing resources in our group. Simple in the sense that users only
run exclusive jobs on single nodes so far. For fairness, we have set a
MaxSubmitJobsPerUser limit up to now.
I would however like to change the policy in the following way:


1)      Each user can submit as many jobs as they want

2)      Only two nodes can be used by a single user at any given time, which
also enables MPI jobs across up to two nodes

3)      Any job that a user has in the queue while being over the limit in
2) should go to the very back of the queue


1)/2) are easy enough: I just get rid of MaxSubmitJobsPerUser and set a
two-node-per-user limit in the QOS instead.

I have not come up with a good way to implement 3), though. I would like the
following behavior:


A)      User X submits two jobs (IDs #1 and #2) requiring two nodes each; #1
will start, #2 will be held back (QOSMaxNodePerUserLimit)

B)      Assume all nodes are taken now

C)      User Y has no running or queued jobs and submits a job (ID #3)

D)      Job #1 finishes, freeing up resources


Current behavior: Job #2 starts, as it was submitted earlier.

What I want: Job #3 should start first, since User Y was not over their QOS
limit at submission time, while User X was.
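(For reference, the pending reason in step A can be inspected with squeue; the output format string here is just an illustration:)

```shell
# Show job ID, user, state, and pending reason; a job held back by the
# per-user node cap reports QOSMaxNodePerUserLimit as its reason.
squeue -o "%.8i %.10u %.10T %r"
```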


My thinking: In this way, users could submit as many jobs as they want
(which they would like for convenience reasons) without getting unfair
precedence over others.

We want to distribute our resources as equally as possible.


Is something like this possible to achieve? Does anyone have an idea how it
could be done? I am happy to hear your thoughts.




