[slurm-users] Limit number of specific concurrent jobs per node

Andreas Hilboll hilboll+slurm at uni-bremen.de
Mon May 7 10:57:32 MDT 2018

Dear SLURM experts,

we have a cluster of 56 nodes with 28 cores each.  Is it possible to 
limit the number of jobs of a certain name which concurrently run on 
one node, without blocking the node for other jobs?

For example, when I do

   for filename in runtimes/*/jobscript.sh; do
     sbatch -J iojob -n 1 "$filename"
   done

How can I assure that only one of these jobs runs per node?  The jobs 
are very lightweight computationally and only use 1 core each, but 
since they are rather heavy on the I/O side, I'd like to ensure that 
when a job runs, it doesn't have to share the available I/O bandwidth 
with other jobs.  (This would actually work, since usually our other 
jobs are not I/O intensive.)
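One possible approach (a sketch only, not something the list has confirmed): Slurm's generic resources (GRES) can model a countable per-node resource. The GRES name "iolimit" below is hypothetical; giving each node a single unit and having every I/O-heavy job request that unit would cap such jobs at one per node while leaving the remaining 27 cores free for other work:

```shell
# slurm.conf (hypothetical GRES name "iolimit", one unit per node):
#   GresTypes=iolimit
#   NodeName=node[01-56] Gres=iolimit:1 CPUs=28 ...
# (a matching gres.conf entry, e.g. "Name=iolimit Count=1",
#  may also be required on each node)

# Submit each I/O-heavy job requesting the single per-node unit,
# so the scheduler cannot place two of them on the same node:
for filename in runtimes/*/jobscript.sh; do
  sbatch -J iojob -n 1 --gres=iolimit:1 "$filename"
done
```

Since the GRES is consumed only by jobs that request it, ordinary compute jobs would still be scheduled onto these nodes as usual.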

From reading the manpage, I couldn't figure out how to do this.

Sunny greetings,
