[slurm-users] Limit number of specific concurrent jobs per node

Andreas Hilboll hilboll+slurm at uni-bremen.de
Mon May 7 10:57:32 MDT 2018


Dear SLURM experts,

we have a cluster of 56 nodes with 28 cores each.  Is it possible to
limit the number of jobs of a certain name which concurrently run on
one node, without blocking the node for other jobs?

For example, when I do

   for filename in runtimes/*/jobscript.sh; do
     sbatch -J iojob -n 1 "$filename"
   done

How can I ensure that only one of these jobs runs per node?  The jobs
are computationally very lightweight and only use 1 core each, but
since they are rather heavy on the I/O side, I'd like to ensure that
when a job runs, it doesn't have to share the available I/O bandwidth
with other jobs.  (This would actually work, since our other jobs are
usually not I/O intensive.)

From reading the manpage, I couldn't figure out how to do this.
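
One idea I've been wondering about (completely untested, and "iobw" is
just a name I made up): define a per-node generic resource (GRES) with
a count of 1 and request it from these jobs.  Since each node would
only have one unit of the resource, only one such job should fit per
node, while jobs that don't request it would be unaffected.  Something
like:

   # slurm.conf -- hypothetical GRES name "iobw", one unit per node
   # (node names here are made up; adjust to the real ones)
   GresTypes=iobw
   NodeName=node[01-56] CPUs=28 Gres=iobw:1

   # submission then becomes
   for filename in runtimes/*/jobscript.sh; do
     sbatch -J iojob -n 1 --gres=iobw:1 "$filename"
   done

(I'm not sure whether a gres.conf entry would be needed on top of
that.)  But maybe there is a more direct way?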


Sunny greetings,
 Andreas


