[slurm-users] Slurm configuration
loris.bennett at fu-berlin.de
Mon Aug 5 06:00:06 UTC 2019
Hi NLHPC employee,
Sistemas NLHPC <sistemas at nlhpc.cl> writes:
> Hi all,
> Currently we have two types of nodes, one with 192 GB and another with
> 768 GB of RAM. We would like the 768 GB nodes not to accept jobs
> requesting less than 192 GB, to avoid underutilizing them, since we
> have other nodes that can handle jobs of 192 GB or less.
> Is it possible to use some slurm configuration to solve this problem?
> PD: All users can submit jobs on all nodes
Bear in mind that you could have a situation in which the 192 GB nodes
are full and the 768 GB nodes are empty. If jobs requiring less than
192 GB are then submitted, they will have to wait, and you will have
underutilisation of your resources.
Rather than excluding the low-memory jobs completely from the
high-memory nodes, you could weight the low-memory nodes such that jobs
preferentially start there. Alternatively, you could set a shorter time
limit for low-memory jobs on the high-memory nodes.
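A minimal slurm.conf sketch of the weighting approach (node names,
counts and memory sizes below are placeholders for your own hardware;
Slurm allocates nodes with the lowest Weight first when everything else
is equal):

```
# Hypothetical node definitions: lower Weight is preferred, so jobs
# land on the 192 GB nodes whenever they fit there, and the 768 GB
# nodes are only used when the low-memory nodes cannot satisfy the job.
NodeName=cn[001-010] RealMemory=192000 Weight=1
NodeName=cn[011-012] RealMemory=768000 Weight=10
```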
In my experience it is best to encourage and assist users to estimate
their memory requirements as accurately as possible. If users are
requesting 192 GB and landing on the low-memory nodes, but only using
96 GB, then you also have resource underutilisation, and since you
probably have more low-memory nodes than high-memory nodes, that might
be the bigger problem.
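One way to help users check their estimates is to compare requested
memory against peak usage for completed jobs with sacct (the job ID
below is a placeholder; MaxRSS is the peak resident memory Slurm's
accounting recorded for each job step):

```
# Compare requested memory (ReqMem) with peak usage (MaxRSS)
# for a finished job; 12345 is an illustrative job ID.
sacct -j 12345 --format=JobID,JobName,ReqMem,MaxRSS,Elapsed
```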
Dr. Loris Bennett (Mr.)
ZEDAT, Freie Universität Berlin Email loris.bennett at fu-berlin.de