<div dir="ltr"><div><div><div><div><div>I have discovered that if I launch slurmd with the "-b" flag, my jobs run successfully. However, this flag is documented as "Report node rebooted when daemon restarted. Used for testing purposes." I'd rather not rely on something that is intended only for testing, but perhaps this gives a clue as to what is going wrong.<br></div></div><br></div><div>Another possible clue is that, without the above flag, slurmctld will wait for ResumeTimeout seconds, report that the node is down, then immediately report that the node is up. The freshly booted slurmd node also receives an instruction to terminate the job that was never allocated to it.<br><br>slurmctld: node foo not resumed by ResumeTimeout(60) - marking down and power_save<br>slurmctld: Killing JobId=2 on failed node foo<br>slurmctld: Node foo now responding<br>slurmctld: node_did_resp: node foo returned to service</div><div><br></div>I've also tried:<br><br></div>* Increasing the resume and slurmd timeouts so they are very long (slurmd easily comes up well within these limits). This has no impact.<br></div><div>* Swapping the order in which I boot slurmd and call scontrol update. This has no impact.<br></div><div>* Setting the state to Resume via scontrol update. This gives me an invalid state transition error from ALLOCATION to RESUME.<br></div><div>* Setting the hostname of the node via scontrol update, because the node's hostname doesn't match the nodename and I have placed the nodename as an alias in /etc/hosts on the slurmd node. 
This has no impact.<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 24 Oct 2020 at 23:01, Rupert Madden-Abbott <<a href="mailto:rupert.madden.abbott@gmail.com">rupert.madden.abbott@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hi,</div><div><br></div><div>I'm using Slurm's elastic compute functionality to spin up nodes in the cloud, alongside a controller which is also in the cloud.</div><div><br></div><div>When executing a job, Slurm correctly places a node into the state "alloc#" and calls my resume program. My resume program successfully provisions the cloud node and slurmd comes up without a problem.</div><div><br></div><div>My resume program then retrieves the ip address of my cloud node and updates the controller as follows:<br><br></div><div>scontrol update nodename=foo nodeaddr=bar</div><div><br></div><div>And then nothing happens! The node remains in the state "alloc#" until the ResumeTimeout is reached at which point the controller gives up.<br><br>I'm fairly confident that slurmd is able to talk to the controller because if I specify an incorrect hostname for the controller in my slurm.conf, then slurmd immediately errors on startup and exits with a message saying something like "unable to contact controller"<br><br></div><div>What am I missing?<br><br></div><div>Thanks very much in advance if anybody has any ideas!<br></div></div>
</blockquote></div>
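<div><br></div><div>In case it helps anyone reproducing this, here is a minimal sketch of the kind of resume program described above. It is not my actual script: the hostnames and the provision_node helper are placeholders for whatever provisions the instance and prints its IP address; the scontrol calls are the same ones discussed in the thread.</div>

```shell
#!/bin/bash
# Sketch of a Slurm ResumeProgram (placeholders, not a working setup).
# slurmctld invokes the script with a hostlist expression, e.g. "node[1-3]".
set -u

resume_nodes() {
    local hostlist=$1
    # "scontrol show hostnames" expands a hostlist expression, one name per line.
    for host in $(scontrol show hostnames "$hostlist"); do
        # provision_node is a stand-in for the cloud provisioning step;
        # it is assumed to print the new instance's IP address on stdout.
        local addr
        addr=$(provision_node "$host")
        # Tell slurmctld where to reach the freshly booted slurmd.
        scontrol update nodename="$host" nodeaddr="$addr"
    done
}

# When run by slurmctld as the ResumeProgram: resume_nodes "$1"
```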