<div dir="ltr">Have you seen this? <a href="https://bugs.schedmd.com/show_bug.cgi?id=7919#c7">https://bugs.schedmd.com/show_bug.cgi?id=7919#c7</a>, fixed in 20.06.1.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 19, 2021 at 11:34 AM Paul Brunk <<a href="mailto:pbrunk@uga.edu">pbrunk@uga.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi all:<br>
<br>
(I hope plague and weather are being visibly less than maximally cruel<br>
to you all.)<br>
<br>
In short, I was trying to exempt a node from NVML Autodetect, and<br>
apparently introduced a syntax error in gres.conf. This is not an<br>
urgent matter for us now, but I'm curious what went wrong. Thanks for<br>
lending any eyes to this!<br>
<br>
More info:<br>
<br>
Slurm 20.02.6, CentOS 7.<br>
<br>
We've historically had only this in our gres.conf:<br>
AutoDetect=nvml<br>
<br>
Each of our GPU nodes has e.g. 'Gres=gpu:V100:1' as part of its<br>
NodeName entry (GPU models vary across them).<br>
<br>
I wanted to exempt one GPU node from the autodetection (I was curious<br>
about the presence or absence of the GPU model subtype designation,<br>
e.g. 'V100' vs. 'v100s'), so I changed gres.conf to this (modelled<br>
after the 'gres.conf' man page):<br>
<br>
AutoDetect=nvml<br>
NodeName=a1-10 AutoDetect=off Name=gpu File=/dev/nvidia0<br>
<br>
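(For the record, the man page also shows a fully static alternative:<br>
no AutoDetect anywhere, with every GPU node listed explicitly. If the<br>
per-node 'AutoDetect=off' syntax turns out to be the problem, I assume<br>
we could fall back to something like the following for that node; the<br>
Type and File values here are just illustrative guesses for our box:<br>
<br>
NodeName=a1-10 Name=gpu Type=V100 File=/dev/nvidia0<br>
<br>
We'd lose autodetection for that node either way, which is the point.)<br>
<br>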
I restarted slurmctld, then ran "scontrol reconfigure". Each node hit<br>
a fatal error parsing gres.conf, which caused RPC failures between<br>
slurmctld and the nodes, leading slurmctld to consider the nodes failed.<br>
<br>
Here's how it looked to slurmctld:<br>
<br>
[2021-02-04T13:36:30.482] backfill: Started JobId=1469772_3(1473148) in batch on ra3-6<br>
[2021-02-04T15:14:48.642] error: Node ra3-6 appears to have a different slurm.conf than the slurmctld. This could cause issues with communication and functionality. Please review both files and make sure they are the same. If this is expected ignore, and set DebugFlags=NO_CONF_HASH in your slurm.conf.<br>
[2021-02-04T15:25:40.258] agent/is_node_resp: node:ra3-6 RPC:REQUEST_PING : Communication connection failure<br>
[2021-02-04T15:39:49.046] requeue job JobId=1443912 due to failure of node ra3-6<br>
<br>
And to the slurmds:<br>
<br>
[2021-02-04T15:14:50.730] Message aggregation disabled<br>
[2021-02-04T15:14:50.742] error: Parsing error at unrecognized key: AutoDetect<br>
[2021-02-04T15:14:50.742] error: Parse error in file /var/lib/slurmd/conf-cache/gres.conf line 2: " AutoDetect=off Name=gpu File=/dev/nvidia0"<br>
[2021-02-04T15:14:50.742] fatal: error opening/reading /var/lib/slurmd/conf-cache/gres.conf<br>
<br>
Reverting to the original, one-line gres.conf reverted the cluster to production state.<br>
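<br>
(Note to self: next time I'll check the parse on a single node before<br>
a cluster-wide reconfigure. If I'm reading the slurmd man page right,<br>
running this on the node should exercise gres.conf parsing without<br>
touching production:<br>
<br>
slurmd -G<br>
<br>
i.e. print the detected GRES configuration and exit.)<br>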
<br>
-- <br>
Paul Brunk, system administrator<br>
Georgia Advanced Computing Resource Center<br>
Enterprise IT Svcs, the University of Georgia<br>
<br>
<br>
</blockquote></div>