[slurm-users] Cgroups and swap with 18.08.1?
John Hearns
hearnsj at googlemail.com
Fri Oct 19 21:11:51 MDT 2018
After doing some Googling:
https://jvns.ca/blog/2017/02/17/mystery-swap/ - "Swapping is weird and
confusing" (amen to that!)
https://jvns.ca/blog/2016/12/03/how-much-memory-is-my-process-using-/
(an interesting article)
From the Docker documentation, below.
Bill - this is what you are seeing: twice as much swap as real memory
usage. So that seems to be default kernel behaviour, even though, as you
say, the Slurm logs report that swap is set to 0.
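If you want to double-check what limit the job's cgroup actually got, you
can read it straight from the memory controller on the node. A minimal
sketch, assuming the default cgroup v1 mountpoint - the uid and job id
below are just placeholders, and the memory.memsw.* file only appears if
the kernel has swap accounting enabled:

    # RAM limit applied to the job cgroup
    cat /sys/fs/cgroup/memory/slurm/uid_1000/job_12345/memory.limit_in_bytes
    # RAM+swap limit; if this is much larger than the RAM limit,
    # swap is not really being constrained for the job
    cat /sys/fs/cgroup/memory/slurm/uid_1000/job_12345/memory.memsw.limit_in_bytes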
https://docs.docker.com/config/containers/resource_constraints/#--memory-swap-details
If --memory-swap is unset, and --memory is set, the container can use twice
as much swap as the --memory setting, if the host container has swap memory
configured. For instance, if --memory="300m" and --memory-swap is not set,
the container can use 300m of memory and 600m of swap.
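You can see that default in action outside of Slurm with a quick test
along these lines (the image and the limits are just examples):

    # Only --memory is set, so the container gets the default swap allowance
    docker run --rm -it --memory=300m ubuntu:18.04 bash
    # Setting --memory-swap equal to --memory disables swap for the container
    docker run --rm -it --memory=300m --memory-swap=300m ubuntu:18.04 bash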
On Sat, 20 Oct 2018 at 03:39, Chris Samuel <chris at csamuel.org> wrote:
> On Tuesday, 16 October 2018 2:47:34 PM AEDT Bill Broadley wrote:
>
> > AllowedSwapSpace=0
> >
> > So I expect jobs to not use swap. Turns out if I run a 3GB ram process
> > with sbatch --mem=1000 I just get a process that uses 1GB ram and 2GB
> > of swap.
>
> That's intended. The manual page says:
>
> AllowedSwapSpace=<number>
>     Constrain the job cgroup swap space to this percentage of the
>     allocated memory. The default value is 0, which means that
>     RAM+Swap will be limited to AllowedRAMSpace.
>
> You probably want this as well:
>
> MemorySwappiness=<number>
>     Configure the kernel's priority for swapping out anonymous
>     pages (such as program data) versus file cache pages for the
>     job cgroup. Valid values are between 0 and 100, inclusive. A
>     value of 0 prevents the kernel from swapping out program
>     data.
>
> cheers!
> Chris
> --
> Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC
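Putting Chris's two suggestions together, the relevant part of cgroup.conf
would look roughly like this - a sketch only, keep whatever other
Constrain* settings you already have:

    ConstrainRAMSpace=yes
    ConstrainSwapSpace=yes
    # limit swap to 0% of the allocated memory, i.e. RAM+Swap = AllowedRAMSpace
    AllowedSwapSpace=0
    # 0 stops the kernel swapping out anonymous pages for the job cgroup
    MemorySwappiness=0

After editing cgroup.conf, restart slurmd on the compute nodes so the new
limits apply to newly started jobs.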