[slurm-users] Limit number of specific concurrent jobs per node

Mahmood Naderan mahmood.nt at gmail.com
Mon May 7 12:40:16 MDT 2018

You may want to read about --nodelist in the sbatch manual.
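As a rough sketch of that suggestion: --nodelist pins a submission to specific nodes, so each I/O job can be sent to a different node. The node names (node01, node02, ...) are an assumption here; adjust them to your cluster's actual naming.

```shell
#!/bin/sh
# Hypothetical sketch: submit each I/O job to a distinct node via --nodelist.
# Assumes nodes are named node01..node56 -- adapt to your site's naming scheme.
i=1
for filename in runtimes/*/jobscript.sh; do
  node=$(printf "node%02d" "$i")
  # 'echo' shows the command that would run; drop it to actually submit.
  echo sbatch -J iojob -n 1 --nodelist="$node" "$filename"
  i=$((i + 1))
done
```

Note this hand-places jobs rather than letting the scheduler choose, so it only works cleanly when the target nodes are free; if a node is busy, the job waits for that specific node.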

On Mon, May 7, 2018, 21:29 Andreas Hilboll <hilboll+slurm at uni-bremen.de> wrote:

> Dear SLURM experts,
>
> we have a cluster of 56 nodes with 28 cores each.  Is it possible to
> limit the number of jobs of a certain name which run concurrently on
> one node, without blocking the node for other jobs?
>
> For example, when I do
>
>    for filename in runtimes/*/jobscript.sh; do
>      sbatch -J iojob -n 1 $filename
>    done
>
> how can I assure that only one of these jobs runs per node?  The jobs
> are computationally very lightweight and each uses only 1 core, but
> since they are rather heavy on the I/O side, I'd like to ensure that
> a running job doesn't have to share the available I/O bandwidth with
> other jobs.  (This would actually work, since our other jobs are
> usually not I/O intensive.)
>
> From reading the manpage, I couldn't figure out how to do this.
>
> Sunny greetings,
>  Andreas
