<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<meta name="Generator" content="Microsoft Word 15 (filtered medium)">
<style><!--
/* Font Definitions */
@font-face
{font-family:"MS Gothic";
panose-1:2 11 6 9 7 2 5 8 2 4;}
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
{font-family:"\@MS Gothic";
panose-1:2 11 6 9 7 2 5 8 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0in;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:#0563C1;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:#954F72;
text-decoration:underline;}
p
{mso-style-priority:99;
mso-margin-top-alt:auto;
margin-right:0in;
mso-margin-bottom-alt:auto;
margin-left:0in;
font-size:11.0pt;
font-family:"Calibri",sans-serif;}
p.msonormal0, li.msonormal0, div.msonormal0
{mso-style-name:msonormal;
mso-margin-top-alt:auto;
margin-right:0in;
mso-margin-bottom-alt:auto;
margin-left:0in;
font-size:12.0pt;
font-family:"Times New Roman",serif;}
span.EmailStyle19
{mso-style-type:personal;
font-family:"Calibri",sans-serif;
color:black;}
span.EmailStyle20
{mso-style-type:personal-reply;
font-family:"Calibri",sans-serif;
color:#1F497D;}
.MsoChpDefault
{mso-style-type:export-only;
font-size:10.0pt;}
@page WordSection1
{size:8.5in 11.0in;
margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang="EN-US" link="#0563C1" vlink="#954F72">
<div class="WordSection1">
<p class="MsoNormal"><span style="color:#1F497D">Thanks for the input, guys!<o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D">We don’t even use Lustre filesystems, and it doesn’t appear to be I/O.<br>
<br>
I execute <b>iostat</b> on both the head node and the compute node when the job is in CG status, and the %iowait value is 0.00 or 0.01:
<o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:8.0pt;font-family:"Courier New";color:#1F497D">$ iostat<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:8.0pt;font-family:"Courier New";color:#1F497D">Linux 3.10.0-957.el7.x86_64 (node002) 07/22/2020 _x86_64_ (32 CPU)<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:8.0pt;font-family:"Courier New";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:8.0pt;font-family:"Courier New";color:#1F497D">avg-cpu: %user %nice %system %iowait %steal %idle<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:8.0pt;font-family:"Courier New";color:#1F497D"> 0.01 0.00 0.01 0.00 0.00 99.98<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:8.0pt;font-family:"Courier New";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="font-size:8.0pt;font-family:"Courier New";color:#1F497D">Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:8.0pt;font-family:"Courier New";color:#1F497D">sda 0.82 14.09 2.39 1157160 196648<o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D">I also tried the following command to see if I could identify any processes in D state on the compute node, but it returned no results:<o:p></o:p></span></p>
<p class="MsoNormal"><span style="font-size:9.0pt;font-family:"Courier New";color:#1F497D">ps aux | awk '$8 ~ /D/ { print $0 }'<o:p></o:p></span></p>
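For the next occurrence, the same check can be extended to dump the kernel stack of each D-state process, which usually shows which syscall it is stuck in (a sketch; reading /proc/PID/stack typically requires root):

```shell
#!/bin/bash
# List processes in D (uninterruptible sleep) state and, where readable,
# the kernel stack each one is blocked in.
for pid in $(ps -eo pid=,stat= | awk '$2 ~ /^D/ {print $1}'); do
    printf '=== PID %s (%s)\n' "$pid" "$(ps -o comm= -p "$pid")"
    cat "/proc/$pid/stack" 2>/dev/null || echo "  (stack unreadable; run as root)"
done
```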
<p class="MsoNormal"><span style="color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D">This one’s got me stumped…<o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D">Sorry, I’m not too familiar with epilog yet; do you have any examples of how I would use it to log the SIGKILL event?<o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D">Thanks again,<br>
Ivan<o:p></o:p></span></p>
<p class="MsoNormal"><span style="color:#1F497D"><o:p> </o:p></span></p>
<div>
<div style="border:none;border-top:solid #E1E1E1 1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b>From:</b> slurm-users &lt;slurm-users-bounces@lists.schedmd.com&gt;
<b>On Behalf Of </b>Paul Edmon<br>
<b>Sent:</b> Thursday, July 23, 2020 7:19 AM<br>
<b>To:</b> slurm-users@lists.schedmd.com<br>
<b>Subject:</b> Re: [slurm-users] Nodes going into drain because of "Kill task failed"<o:p></o:p></p>
</div>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<p>Same here. Whenever we see rashes of "Kill task failed", it is invariably symptomatic of one of our Lustre filesystems acting up or being saturated.<span style="font-size:12.0pt"><o:p></o:p></span></p>
<p>-Paul Edmon-<o:p></o:p></p>
<div>
<p class="MsoNormal">On 7/22/2020 3:21 PM, Ryan Cox wrote:<o:p></o:p></p>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt">Angelos,<br>
<br>
I'm glad you mentioned UnkillableStepProgram. We meant to look at that a while ago but forgot about it. That will be very useful for us as well, though the answer for us is pretty much always Lustre problems.<br>
<br>
Ryan<o:p></o:p></p>
<div>
<p class="MsoNormal">On 7/22/20 1:02 PM, Angelos Ching wrote:<o:p></o:p></p>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">Agreed. You may also want to write a script that gathers the list of programs in "D state" (uninterruptible kernel wait) and prints their stacks, and configure it as UnkillableStepProgram, so that you can capture the program and the relevant
system calls that caused the job to become unkillable or to time out while exiting, for further troubleshooting. <o:p>
</o:p></p>
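A minimal sketch of such an UnkillableStepProgram (the script path and log location are placeholders, and it assumes slurmstepd exports SLURM_JOB_ID to the program; the script falls back to "?" if not):

```shell
#!/bin/bash
# Hypothetical UnkillableStepProgram: when slurmd gives up killing a step,
# snapshot every D-state process so the blocking syscall can be found later.
# Wired up in slurm.conf with something like:
#   UnkillableStepProgram=/usr/local/sbin/unkillable.sh
LOG=${UNKILLABLE_LOG:-/tmp/unkillable.log}   # a real deployment would log under /var/log
{
    echo "=== $(date -u) host=$(hostname) job=${SLURM_JOB_ID:-?}"
    # header line plus any process whose state starts with D
    ps -eo pid,stat,wchan:32,comm | awk 'NR == 1 || $2 ~ /^D/'
} >> "$LOG" 2>&1
```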
<div>
<p class="MsoNormal"><br>
Regards,<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal">Angelos<o:p></o:p></p>
<div>
<p class="MsoNormal">(Sent from mobile, please pardon me for typos and cursoriness.)<o:p></o:p></p>
</div>
<div>
<p class="MsoNormal"><br>
<br>
<o:p></o:p></p>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal" style="margin-bottom:12.0pt">On 2020/07/23 at 0:41, Ryan Cox
<a href="mailto:ryan_cox@byu.edu">&lt;ryan_cox@byu.edu&gt;</a> wrote:<o:p></o:p></p>
</blockquote>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt"> Ivan,<br>
<br>
Are you having I/O slowness? That is the most common cause for us. If it's not that, you'll want to look through all the reasons that it takes a long time for a process to actually die after a SIGKILL because one of those is the likely cause. Typically it's
because the process is waiting for an I/O syscall to return. Sometimes swap death is the culprit, but usually not at the scale that you stated. Maybe you could try reproducing the issue manually, or putting something in epilog to see the state of the processes
in the job's cgroup.<br>
<br>
Ryan<o:p></o:p></p>
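The epilog idea above could be sketched like this (everything here is an assumption about the local setup: a cgroup v1 freezer layout as used by proctrack/cgroup, and an Epilog=/etc/slurm/epilog.sh entry in slurm.conf; Slurm sets SLURM_JOB_ID and SLURM_JOB_UID in the epilog environment):

```shell
#!/bin/bash
# Hypothetical Epilog script: log any processes still alive in the job's
# cgroup at job end. The path assumes cgroup v1 with proctrack/cgroup;
# adjust for your own hierarchy (cgroup v2 lays this out differently).
CG="/sys/fs/cgroup/freezer/slurm/uid_${SLURM_JOB_UID}/job_${SLURM_JOB_ID}"
if [ -r "$CG/cgroup.procs" ]; then
    while read -r pid; do
        logger -t slurm-epilog \
            "job ${SLURM_JOB_ID}: leftover pid $pid: $(ps -o stat=,args= -p "$pid")"
    done < "$CG/cgroup.procs"
fi
true  # an epilog that exits nonzero would itself drain the node
```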
<div>
<p class="MsoNormal">On 7/22/20 10:24 AM, Ivan Kovanda wrote:<o:p></o:p></p>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<p class="MsoNormal"><span style="color:black">Dear slurm community,</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">Currently running slurm version 18.08.4</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">We have been experiencing an issue in which every node a slurm job was submitted to ends up in "drain" state.</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">From what I've seen, it appears there is a problem with how slurm cleans up the job via SIGKILL.</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">I've found this slurm article (<a href="https://slurm.schedmd.com/troubleshoot.html#completing">https://slurm.schedmd.com/troubleshoot.html#completing</a>), which has a section titled "Jobs and nodes are stuck in COMPLETING state" that recommends increasing "UnkillableStepTimeout" in slurm.conf, but all that has done is prolong the time it takes for the job to time out.
</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">The default time for the "UnkillableStepTimeout" is 60 seconds.</span><o:p></o:p></p>
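For reference, the setting is a single slurm.conf line taking a value in seconds; a hypothetical raised value looks like:

```
# slurm.conf -- give stuck steps longer before the node is drained
UnkillableStepTimeout=180
```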
<p class="MsoNormal"><span style="color:black"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">After the job completes, it stays in the CG (completing) status for the 60 seconds, then the nodes the job was submitted to go to drain status.</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">On the headnode running slurmctld, I am seeing this in the log - /var/log/slurmctld:</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">--------------------------------------------------------------------------------------------------------------------------------------------</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">[2020-07-21T22:40:03.000] update_node: node node001 reason set to: Kill task failed</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">[2020-07-21T22:40:03.001] update_node: node node001 state set to DRAINING</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">On the compute node, I am seeing this in the log - /var/log/slurmd</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">--------------------------------------------------------------------------------------------------------------------------------------------</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">[2020-07-21T22:38:33.110] [1485.batch] done with job</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">[2020-07-21T22:38:33.110] [1485.extern] Sent signal 18 to 1485.4294967295</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">[2020-07-21T22:38:33.111] [1485.extern] Sent signal 15 to 1485.4294967295</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">[2020-07-21T22:39:02.820] [1485.extern] Sent SIGKILL signal to 1485.4294967295</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">[2020-07-21T22:40:03.000] [1485.extern] error: *** EXTERN STEP FOR 1485 STEPD TERMINATED ON node001 AT 2020-07-21T22:40:02 DUE TO JOB NOT ENDING WITH SIGNALS ***</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">I've tried restarting the slurmd daemon on the compute nodes, and even completely rebooting a few compute nodes (node001, node002).
</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">From what I've seen, we're experiencing this on all nodes in the cluster.
</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">I've yet to restart the head node because there are still active jobs on the system, and I don't want to interrupt those.</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black"> </span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">Thank you for your time,</span><o:p></o:p></p>
<p class="MsoNormal"><span style="color:black">Ivan</span><o:p></o:p></p>
<p class="MsoNormal"> <o:p></o:p></p>
</div>
</blockquote>
</div>
</blockquote>
</div>
</blockquote>
<p class="MsoNormal"><span style="font-size:12.0pt;font-family:"Times New Roman",serif"><o:p> </o:p></span></p>
</blockquote>
</div>
</body>
</html>