[slurm-users] Increasing /dev/shm max size?
Diego Zuccato
diego.zuccato at unibo.it
Thu Oct 22 10:56:37 UTC 2020
Hello all.
I've been asked to increase the /dev/shm max size to 95% of the
available memory for DART (the DASH RunTime).
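For reference, here is how I would apply such a change (a minimal
sketch, assuming /dev/shm is the usual tmpfs mount; per the tmpfs docs
the size= option accepts a percentage of physical RAM):

  # Resize the live mount, no reboot needed:
  mount -o remount,size=95% /dev/shm

  # Make the change persistent across reboots in /etc/fstab:
  tmpfs  /dev/shm  tmpfs  defaults,size=95%  0  0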
In an article [*], I read: "It is thus imperative that HPC system
administrators are aware of the caveats of shared memory and provide
sufficient resources for both /tmp and /dev/shm filesystems, ideally
allowing users to allocate (nearly) all memory available on the node."
But the docs for tmpfs [**] warn: "If you oversize your tmpfs
instances the machine will deadlock since the OOM handler will not be
able to free that memory."
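For what it's worth, the kernel default for tmpfs is half of physical
RAM, so 95% is well beyond the out-of-the-box setting; the current
limit is easy to check before touching anything:

  # Show the current size limit and usage of /dev/shm:
  df -h /dev/shm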
It seems some big clusters (MPCDF and LRZ in Germany, and CNAF in
Italy) have already made this change.
So I have two questions:
1) Is it actually safe to make such a change (i.e. is 95% not
'oversizing')?
2) Is shared memory accounted as belonging to the allocating process
and enforced accordingly by cgroups?
The second question is important because, if shared memory is not
accounted to the process that allocates it, then nodes cannot be
shared between jobs: one job could use more memory than it requested
without Slurm's knowledge...
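A quick way to probe the accounting would be something like this (a
sketch, assuming a cgroup v1 memory hierarchy as Slurm's task/cgroup
plugin sets up; the exact path depends on the local configuration):

  # In a job step limited to e.g. 1 GB, try writing 2 GB into /dev/shm:
  dd if=/dev/zero of=/dev/shm/shmtest bs=1M count=2048

  # If tmpfs pages are charged to the job's cgroup, the usage shows up
  # here (and dd should get OOM-killed on a swapless node):
  cat /sys/fs/cgroup/memory/slurm/uid_$(id -u)/job_*/memory.usage_in_bytes

  # Note: tmpfs pages outlive the writing process; they are freed only
  # when the file is removed:
  rm -f /dev/shm/shmtest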
Tks.
[*] Joseph Schuchart, Roger Kowalewski, and Karl Fuerlinger. 2018.
Recent Experiences in Using MPI-3 RMA in the DASH PGAS Runtime. In
HPC Asia 2018 WS: Workshops of HPC Asia 2018, January 31, 2018,
Chiyoda, Tokyo, Japan. ACM, New York, NY, USA, 10 pages.
https://doi.org/10.1145/3176364.3176367
[**] https://www.kernel.org/doc/Documentation/filesystems/tmpfs.txt
--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786