<div dir="ltr">Thanks Andy,<div><br></div><div>I've been able to confirm that in my case, any jobs that ran for at least 30 minutes (puppet's run interval) would lose their cgroups, and that the times those cgroups disappear correspond exactly with puppet runs. I am not sure whether this cgroup change to root is what causes the oom event that Slurm detects - I looked through src/plugins/task/cgroup/task_cgroup_memory.c and the memory cgroup documentation, and it's not clear to me what happens if you've created the oom event listener on a specific cgroup and that cgroup disappears. But since I disabled puppet overnight, jobs running longer than 30 minutes are completing and their cgroups are persisting, whereas before they were not.</div><div><br></div><div>--nate</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Apr 30, 2018 at 5:47 PM, Andy Georges <span dir="ltr"><<a href="mailto:Andy.Georges@ugent.be" target="_blank">Andy.Georges@ugent.be</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class=""><br>
<br>
> On 30 Apr 2018, at 22:37, Nate Coraor <<a href="mailto:nate@bx.psu.edu">nate@bx.psu.edu</a>> wrote:<br>
> <br>
> Hi Shawn,<br>
> <br>
> I'm wondering if you're still seeing this. I've recently enabled task/cgroup on 17.11.5 running on CentOS 7 and just discovered that jobs are escaping their cgroups. For me this is resulting in a lot of jobs ending in OUT_OF_MEMORY that shouldn't, because it appears slurmd thinks the oom-killer has triggered when it hasn't. I'm not using GRES or devices, only:<br>
<br>
</span>I am not sure that you are making the correct conclusion here.<br>
<br>
There is a known cgroups issue, due to<br>
<br>
<a href="https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt" rel="noreferrer" target="_blank">https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt</a><br>
<br>
Relevant part:<br>
<br>
The memory controller has a long history. A request for comments for the memory<br>
controller was posted by Balbir Singh [1]. At the time the RFC was posted<br>
there were several implementations for memory control. The goal of the<br>
RFC was to build consensus and agreement for the minimal features required<br>
for memory control. The first RSS controller was posted by Balbir Singh[2]<br>
in Feb 2007. Pavel Emelianov [3][4][5] has since posted three versions of the<br>
RSS controller. At OLS, at the resource management BoF, everyone suggested<br>
that we handle both page cache and RSS together. Another request was raised<br>
to allow user space handling of OOM. The current memory controller is<br>
at version 6; it combines both mapped (RSS) and unmapped Page<br>
Cache Control [11].<br>
<br>
Are the jobs killed prematurely? If not, then you ran into the above.<br>
<br>
Kind regards.<br>
<span class="HOEnZb"><font color="#888888">— Andy<br>
</font></span></blockquote></div><br></div>