[slurm-users] Swap Configuration for compute nodes

Jeffrey Frey frey at udel.edu
Wed Aug 17 11:59:39 UTC 2022


We've actually been patching the Slurm cgroup plug-in to allow configurable per-node and per-partition swap settings.  E.g. on node X with 64 cores, a job gets (N_core,job / N_core,tot) * 8 GiB added to its physical RAM limit, where 8 GiB is some configured fraction of the node's total swap.  It's still not technically a swap limit per se, because it constrains the sum of swap + physical RAM usage, but it has kept our nodes from being starved out by heavy swapping, and it scales with job size, etc.
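For illustration, here is a minimal sketch of that arithmetic only (not the actual plug-in patch; the function name, node totals, and 8 GiB allowance are just example figures):

    /*
     * Sketch of the ceiling computation described above: the job's RAM
     * limit is raised by its core-share of the node's swap allowance,
     * and the result is applied as a combined RAM+swap ceiling.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define GiB (1024ULL * 1024ULL * 1024ULL)

    static uint64_t job_memsw_ceiling(uint64_t ram_limit_bytes,
                                      uint32_t cores_job,
                                      uint32_t cores_total,
                                      uint64_t swap_allowance_bytes)
    {
        /* (N_core,job / N_core,tot) * swap allowance, added on top of RAM. */
        double share = (double)cores_job / (double)cores_total;
        return ram_limit_bytes + (uint64_t)(share * (double)swap_allowance_bytes);
    }

    int main(void)
    {
        /* 16 of 64 cores, 32 GiB RAM request, 8 GiB per-node swap allowance. */
        uint64_t limit = job_memsw_ceiling(32 * GiB, 16, 64, 8 * GiB);
        printf("RAM+swap ceiling: %.2f GiB\n", (double)limit / (double)GiB); /* 34.00 */
        return 0;
    }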

Jeffrey Frey, Ph.D
frey at udel.edu
Sent from my iPhone

> On Aug 17, 2022, at 04:13, Hermann Schwärzler <hermann.schwaerzler at uibk.ac.at> wrote:
> 
> Hi Eg.
> 
> If you are using cgroups (as you do, if I read your other post correctly), these two lines in your cgroup.conf should do the trick:
> 
> ConstrainSwapSpace=yes
> AllowedSwapSpace=0
> 
> Regards,
> Hermann
> 
> PS: BTW, we are planning *not* to use this setting, as right now we are looking into allowing jobs to swap/page once they have exhausted their requested memory.
> So far our impression is that this affects only the respective job and not the whole compute node. But maybe we are wrong...
> 
> 
>> On 8/11/22 10:51 PM, Eg. Bo. wrote:
>> Hello,
>> Could anybody share a decent configuration that prohibits swap usage for any job? Right now, I think my configuration (see my previous post regarding the RealMemory setting) could end up swapping, which is not intended at all for compute nodes.
>> Thanks & best
>> Eg.
> 
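For what it's worth, my reading of what those two cgroup.conf lines amount to per job is roughly the following (simplified sketch; it ignores AllowedRAMSpace and any clamping the plug-in does): with AllowedSwapSpace=0 the RAM+swap ceiling equals the RAM ceiling, so the job cannot page out at all.

    #include <stdint.h>
    #include <stdio.h>

    /* Simplified model of ConstrainSwapSpace=yes: the job's RAM+swap ceiling
     * is its RAM limit plus AllowedSwapSpace percent of that limit. */
    static uint64_t memsw_ceiling(uint64_t ram_limit_bytes, double allowed_swap_pct)
    {
        return ram_limit_bytes +
               (uint64_t)((double)ram_limit_bytes * allowed_swap_pct / 100.0);
    }

    int main(void)
    {
        uint64_t ram = 32ULL << 30;  /* 32 GiB RAM limit for the job */
        printf("AllowedSwapSpace=0:  %llu bytes\n",
               (unsigned long long)memsw_ceiling(ram, 0.0));   /* same as RAM limit */
        printf("AllowedSwapSpace=25: %llu bytes\n",
               (unsigned long long)memsw_ceiling(ram, 25.0));  /* RAM limit + 8 GiB */
        return 0;
    }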



