[slurm-users] Disabling SWAP space: will it affect SLURM working?

Davide DelVento davide.quantum at gmail.com
Mon Dec 11 16:19:20 UTC 2023


A little late here, but yes, everything Hans said is correct. If you are
worried about slurm (or other critical system software) getting killed by
the OOM killer, you can work around it by properly configuring cgroups.
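For example, something along these lines keeps jobs confined to the memory
they requested so a runaway job is killed inside its own cgroup instead of
taking slurmd down with it. This is only a minimal sketch; the exact
parameter names and defaults should be checked against the slurm.conf and
cgroup.conf man pages for your Slurm version:

    # slurm.conf -- enforce per-job limits through cgroups
    TaskPlugin=task/cgroup
    SelectType=select/cons_tres
    SelectTypeParameters=CR_Core_Memory

    # cgroup.conf -- confine jobs to the memory they asked for
    ConstrainCores=yes
    ConstrainRAMSpace=yes
    ConstrainSwapSpace=yes    # largely moot once swap is disabled
    AllowedSwapSpace=0        # percent of allocated RAM allowed as extra swap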

On Wed, Dec 6, 2023 at 2:06 AM Hans van Schoot <vanschoot at scm.com> wrote:

> Hi Joseph,
>
> This might depend on the rest of your configuration, but in general swap
> should not be needed for anything on Linux.
> BUT: you might get OOM killer messages in your system logs, and SLURM
> might fall victim to the OOM killer (OOM = Out Of Memory) if you run
> applications on the compute node that eat up all your RAM.
> Swap does not prevent this, but it makes it less likely to happen.
> I've seen OOM kill slurm daemon processes on compute nodes with swap,
> usually slurm recovers just fine after the application that ate up all the
> RAM ends up getting killed by the OOM killer. My compute nodes are not
> configured to monitor memory usage of jobs. If you have memory configured
> as a managed resource in your SLURM setup, and you leave a bit of headroom
> for the OS itself (e.g. only hand out a maximum of 250 GB RAM to jobs on
> your 256GB RAM nodes), you should be fine.
>
> cheers,
> Hans
>
>
> ps. I'm just a happy slurm user/admin, not an expert, so I might be wrong
> about everything :-)
>
>
>
> On 06-12-2023 05:57, John Joseph wrote:
>
> Dear All,
> Good morning
> We have a 4-node SLURM instance [256 GB RAM in each node] which we
> installed and which is working fine.
> We have 2 GB of swap space on each node. To make full use of the system,
> we would like to disable the swap memory.
>
> I would like to know: if I disable the swap partition, will it affect
> SLURM functionality?
>
> Advice requested
> Thanks
> Joseph John
>
>
>
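To illustrate Hans's point above about leaving headroom when memory is a
managed resource: you can simply advertise a bit less than the physical RAM
in the node definition. The node names and numbers below are made up for a
256 GB node and are not recommendations; RealMemory and MemSpecLimit are
both in MB:

    # slurm.conf -- illustrative node definition for a 256 GB node
    SelectTypeParameters=CR_Core_Memory
    # Advertise ~250 GB instead of the full 256 GB so jobs leave room for the OS:
    NodeName=node[1-4] RealMemory=250000 State=UNKNOWN
    # Alternatively, reserve memory explicitly for slurmd and the OS, e.g.:
    #   MemSpecLimit=6144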

