<div dir="ltr">Hi Daniel,<div><br></div><div>I have tried this configuration, but it has not given the expected results.<br><br>Is there another way to achieve this, or does something else need to be configured for the weight parameter to take effect?<br></div><div><br></div><div>Thanks in advance.</div><div><br></div><div>Regards,</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, 5 Aug 2019 at 5:35, Daniel Letai (<<a href="mailto:dani@letai.org.il">dani@letai.org.il</a>>) wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div style="direction:ltr" bgcolor="#FFFFFF">
<p>Hi.</p>
<p><br>
</p>
<div>On 8/3/19 12:37 AM, Sistemas NLHPC
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi all,<br>
<br>
Currently we have two types of nodes, one with 192 GB of RAM and another
with 768 GB. On the 768 GB nodes we need to prevent jobs that request
less than 192 GB from running, to avoid underutilizing those
resources.<br>
<br>
This is because we have other nodes that can satisfy the requirement of
running jobs with 192 GB or less.<br>
<br>
Is there a Slurm configuration that solves this problem?<br>
</div>
</blockquote>
<p>The easiest approach would be to use features/constraints. In slurm.conf, add:</p>
<p>NodeName=DEFAULT RealMemory=196608 Features=192GB Weight=1<br>
</p>
<p>NodeName=... (list all nodes with 192GB)</p>
<p>NodeName=DEFAULT RealMemory=786432 Features=768GB Weight=2</p>
<p>NodeName=... (list all nodes with 768GB)</p>
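<p>As a side note, after editing slurm.conf the daemons have to pick up
the change, e.g. with scontrol reconfigure (some slurm.conf changes may
require restarting slurmctld instead). You can then check that a node
got the intended weight and feature (the node name here is just a
placeholder):</p>
<p>scontrol reconfigure<br>
scontrol show node node01 | grep -E 'Weight|Features'</p>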
<p><br>
</p>
<p>And to run jobs only on node with 192GB in sbatch do</p>
<p>sbatch -C 192GB ...</p>
<p><br>
</p>
<p>To allow a job to run on any node, simply omit the constraint from the
sbatch line; because the 192GB nodes have the lower weight, the
scheduler should prefer to start jobs on them.<br>
</p>
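<p>For illustration, a job script for a run that fits in the smaller
nodes might look like this (the script contents, memory request, and
program name are hypothetical):</p>
<p>#!/bin/bash<br>
#SBATCH -C 192GB<br>
#SBATCH --mem=100G<br>
srun ./my_program</p>
<p>Submitted with sbatch, the -C 192GB constraint keeps the job off the
768GB nodes entirely, while --mem lets Slurm verify that the request
fits within the 192GB nodes' RealMemory.</p>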
<blockquote type="cite">
<div dir="ltr"><br>
PS: All users can submit jobs on all nodes<br>
<br>
Thanks in advance <br>
<br>
Regards.<br>
<br>
</div>
</blockquote>
</div>
</blockquote></div>