[slurm-users] Slurm configuration

Sistemas NLHPC sistemas at nlhpc.cl
Tue Oct 29 13:17:42 UTC 2019


Hi Daniel

I have tried this configuration, but it has not worked for me.

Is there another option to achieve this, or does something else need to be
configured for the Weight parameter to take effect?

Thanks in advance.

Regards,

On Mon, Aug 5, 2019 at 5:35, Daniel Letai (<dani at letai.org.il>)
wrote:

> Hi.
>
>
> On 8/3/19 12:37 AM, Sistemas NLHPC wrote:
>
> Hi all,
>
> Currently we have two types of nodes, one with 192GB of RAM and another
> with 768GB. We need to prevent jobs that request less than 192GB from
> running on the 768GB nodes, to avoid underutilizing those resources.
>
> This is because we already have nodes that can handle jobs requesting
> 192GB or less.
>
> Is it possible to use some slurm configuration to solve this problem?
>
> Easiest would be to use features/constraints. In slurm.conf, add:
>
> NodeName=DEFAULT RealMemory=196608 Features=192GB Weight=1
>
> NodeName=... (list all nodes with 192GB)
>
> NodeName=DEFAULT RealMemory=786432 Features=768GB Weight=2
>
> NodeName=... (list all nodes with 768GB)
>
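> For illustration, here is a minimal sketch of how those entries might look
> in slurm.conf. The node names cn[001-010] and fat[01-02], the CPU counts,
> and the partition name are hypothetical placeholders; substitute your own:
>
> # 192GB nodes: lower weight, so the scheduler prefers them
> NodeName=DEFAULT RealMemory=196608 Features=192GB Weight=1
> NodeName=cn[001-010] CPUs=40
>
> # 768GB nodes: higher weight, so they are only chosen when needed
> NodeName=DEFAULT RealMemory=786432 Features=768GB Weight=2
> NodeName=fat[01-02] CPUs=40
>
> # One partition covering both groups
> PartitionName=general Nodes=cn[001-010],fat[01-02] Default=YES State=UP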
>
> And to run jobs only on nodes with 192GB, in sbatch do:
>
> sbatch -C 192GB ...
>
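> As a concrete example, a batch script pinned to the 192GB nodes could look
> like this (the job name, memory, time and ./my_program are placeholders):
>
> #!/bin/bash
> # --constraint=192GB is the same as -C 192GB on the command line
> #SBATCH --job-name=small_mem_job
> #SBATCH --constraint=192GB
> #SBATCH --mem=100G
> #SBATCH --time=01:00:00
>
> srun ./my_program
>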
>
> To allow jobs to run on all nodes, simply don't add the constraint to the
> sbatch line; due to their lower weight, the 192GB nodes should be
> preferred when jobs are scheduled.
>
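> To confirm that slurmctld picked up the new weights after editing
> slurm.conf (node names are the hypothetical ones from the sketch above),
> something like:
>
> scontrol reconfigure
> scontrol show node cn001 | grep -o 'Weight=[0-9]*'
> scontrol show node fat01 | grep -o 'Weight=[0-9]*'
>
> The first should report Weight=1 and the second Weight=2.
>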
>
> P.S.: All users can submit jobs on all nodes.
>
> Thanks in advance
>
> Regards.
>
>

