[slurm-users] possible to set memory slack space before killing jobs?

Eli V eliventer at gmail.com
Thu Dec 6 07:00:49 MST 2018


On Wed, Dec 5, 2018 at 5:04 PM Bjørn-Helge Mevik <b.h.mevik at usit.uio.no> wrote:
>
> I don't think Slurm has any facility for soft memory limits.
>
> But you could emulate it by simply configure the nodes in slurm.conf
> with, e.g., 15% higher RealMemory value than what is actually available
> on the node.  Then a node with 256 GiB RAM would be able to run 9 jobs,
> each asking for 32 GiB RAM.
>
> (You wouldn't get the effect that a job would be allowed to exceed its
> soft limit for a set amount of time before getting killed, though.)
>
> --
> Regards,
> Bjørn-Helge Mevik, dr. scient,
> Department for Research Computing, University of Oslo

I don't think this is possible currently. In my experience slurmd
will auto-drain a node if its actual physical memory is less than
what's defined for it in slurm.conf. I guess we could add the
slack space/overcommit here as well, though. I'll have to think about
it some more. It's not immediately obvious to me that it wouldn't
work, but it seems intuitively odd to me for some reason.
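For reference, Bjørn-Helge's workaround would amount to a slurm.conf fragment roughly like the one below. The node name, CPU count, and memory figures are my own assumptions for a 256 GiB node, and neither of us has tested this; the FastSchedule=2 line is, if I remember the docs right, how Slurm of this era could be told to trust the configured values instead of draining the node for under-reporting:

```
# Node actually has 256 GiB (262144 MiB) of RAM; advertise ~15% more
# (262144 * 1.15 ~= 301465 MiB) so nine 32 GiB jobs fit instead of eight.
NodeName=node[01-04] CPUs=32 RealMemory=301465 State=UNKNOWN

# Caveat: slurmd normally drains a node that registers with less memory
# than configured. FastSchedule=2 (as documented at the time) makes the
# controller schedule purely from slurm.conf and skip that check --
# use at your own risk.
FastSchedule=2
```

Note this only packs more jobs per node; as pointed out above, it gives none of the "exceed the soft limit for a grace period before being killed" behavior of a true soft limit.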


