<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Also I recommend setting:</p>
<dl compact="compact">
<dt><b>CoreSpecCount</b></dt>
<dd>Number of cores reserved for system use.
These cores will not be available for allocation to user jobs.
Depending upon whether <b>TaskPluginParam</b> includes the
<b>SlurmdOffSpec</b> option, Slurm daemons (i.e. slurmd and slurmstepd)
may either be confined to these
resources (the default) or prevented from using these resources.
Isolation of the Slurm daemons from user jobs may improve
application performance.
If this option and <b>CpuSpecList</b> are both designated for a
node, an error is generated. For information on the algorithm
used by Slurm
to select the cores, refer to the core specialization
documentation
(<a href="https://slurm.schedmd.com/core_spec.html"
class="moz-txt-link-freetext">https://slurm.schedmd.com/core_spec.html</a>).
</dd>
</dl>
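<p>As a rough sketch of what this can look like in slurm.conf (the node
names, topology and the choice of 2 reserved cores below are placeholders
to adapt to your hardware):</p>
<pre>
# slurm.conf: reserve 2 cores per node for slurmd/slurmstepd and the OS
NodeName=node[01-04] Sockets=2 CoresPerSocket=32 ThreadsPerCore=2 RealMemory=256000 CoreSpecCount=2

# By default the daemons are confined to the reserved cores; uncomment to
# keep them off those cores instead:
#TaskPluginParam=SlurmdOffSpec
</pre>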
<p>and</p>
<dl compact="compact">
<dt><b>MemSpecLimit</b></dt>
<dd>Amount of memory, in megabytes, reserved for system use and
not available
for user allocations.
If the task/cgroup plugin is configured and that plugin
constrains memory
allocations (i.e. <b>TaskPlugin=task/cgroup</b> in slurm.conf,
plus
<b>ConstrainRAMSpace=yes</b> in cgroup.conf), then Slurm compute
node daemons
(slurmd plus slurmstepd) will be allocated the specified memory
limit. Note that for this option to work, <b>SelectTypeParameters</b>
must be set to one of the options that treats memory as a consumable
resource.
The daemons will not be killed if they exhaust the memory
allocation
(i.e. the Out-Of-Memory Killer is disabled for the daemon's
memory cgroup).
If the task/cgroup plugin is not configured, the specified
memory will only be
unavailable for user allocations.
</dd>
</dl>
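<p>Continuing the same sketch (values again illustrative),
<b>MemSpecLimit</b> goes on the same node line and relies on memory being
a consumable resource plus the cgroup task plugin; CR_Core_Memory is just
one of the memory-as-consumable options:</p>
<pre>
# slurm.conf
SelectType=select/cons_tres
SelectTypeParameters=CR_Core_Memory
TaskPlugin=task/cgroup
NodeName=node[01-04] Sockets=2 CoresPerSocket=32 ThreadsPerCore=2 RealMemory=256000 CoreSpecCount=2 MemSpecLimit=4096

# cgroup.conf
ConstrainRAMSpace=yes
</pre>
<p>With that in place the daemons get the reserved 4 GB (4096 MB), and
jobs can only be allocated the remaining memory.</p>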
<p>These will reserve specific cores and memory for system use.
This is probably the best way to go rather than spoofing your
config.</p>
<p>-Paul Edmon-<br>
</p>
<div class="moz-cite-prefix">On 1/7/2022 2:36 AM, Rémi Palancher
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:WFsyn5mnBDA1QoA1zW0LO8amVoL_aOB_Qy7wsAj-tdh--z1HVwH-F0dDOgXR-2tuQLtTxQwQrZ19IljhVrGEWpiu9zwWpShfEyTMO7oZhg4=@rackslab.io">
<pre class="moz-quote-pre" wrap="">Le jeudi 6 janvier 2022 à 22:39, David Henkemeyer <a class="moz-txt-link-rfc2396E" href="mailto:david.henkemeyer@gmail.com"><david.henkemeyer@gmail.com></a> a écrit :
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">All,
When my team used PBS, we had several nodes that had a TON of CPUs, so many, in fact, that we ended up setting np to a smaller value, in order to not starve the system of memory.
What is the best way to do this with Slurm? I tried modifying # of CPUs in the slurm.conf file, but I noticed that Slurm enforces that "CPUs" is equal to Boards * SocketsPerBoard * CoresPerSocket * ThreadsPerCore. This left me with having to "fool" Slurm into thinking there were either fewer ThreadsPerCore, fewer CoresPerSocket, or fewer SocketsPerBoard. This is a less than ideal solution, it seems to me. At least, it left me feeling like there has to be a better way.
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
I'm not sure you can lie to Slurm about the real number of CPUs on the nodes.
If you want to prevent Slurm from allocating more than n CPUs, where n is below the total number of CPUs of these nodes, I guess one solution is to use MaxCPUsPerNode=n at the partition level.
You can also mask "system" CPUs with CpuSpecList at node level.
The latter is better if you need fine-grained control over the exact list of reserved CPUs with regard to NUMA topology or whatever.
--
Rémi Palancher
Rackslab: Open Source Solutions for HPC Operations
<a class="moz-txt-link-freetext" href="https://rackslab.io">https://rackslab.io</a>
</pre>
</blockquote>
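<p>For completeness, the alternatives Rémi mentions would look roughly like
the lines below (partition name, node names and CPU IDs are made up, and
CpuSpecList cannot be combined with CoreSpecCount on the same node):</p>
<pre>
# slurm.conf: cap allocations at 120 CPUs per node in this partition
PartitionName=big Nodes=node[01-04] MaxCPUsPerNode=120 State=UP

# or mask specific CPU IDs at the node level
# (example IDs; the mapping depends on your topology)
NodeName=node[01-04] Sockets=2 CoresPerSocket=32 ThreadsPerCore=2 CpuSpecList=0,1,64,65
</pre>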
</body>
</html>