[slurm-users] scontrol reboot does not allow new jobs to be scheduled if nextstate=RESUME is set

Christopher Samuel chris at csamuel.org
Wed Oct 25 00:10:51 UTC 2023


On 10/24/23 12:39, Tim Schneider wrote:

> Now my issue is that when I run "scontrol reboot ASAP nextstate=RESUME 
> <node>", the node goes into "mix@" state (not drain), but no new jobs 
> get scheduled until the node reboots. Essentially I get draining 
> behavior, even though the node's state is not "drain". Note that this 
> behavior is caused by "nextstate=RESUME"; if I leave that out, jobs 
> get scheduled as expected. Does anyone have an idea why that could be?

The intent of the "ASAP" flag for "scontrol reboot" is to not let any 
more jobs onto a node until it has rebooted.
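
For example (node name purely illustrative), that looks roughly like 
this, and sinfo then shows the pending reboot as the "@" suffix you're 
seeing:

    # Mark node001 for reboot once its running jobs finish; no new
    # jobs are scheduled onto it in the meantime.
    scontrol reboot ASAP nextstate=RESUME node001

    # The pending reboot shows up as an "@" suffix on the node state.
    sinfo -n node001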

IIRC that came from work we sponsored. The idea (for how our nodes are 
managed) is that we build new images with the latest software stack, 
test them on a separate test system and then, once happy, bring them 
over to the production system and do an "scontrol reboot ASAP 
nextstate=resume reason=... $NODES" to ensure that from that point 
onwards no new jobs start under the old software configuration, only 
the new one.
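
In practice that invocation looks something like this (the node list 
and reason string here are just made up for illustration; quote the 
reason if it contains spaces):

    # Roll a new image out to a set of production nodes: no new jobs
    # start under the old image, and after the reboot the nodes return
    # to service automatically (nextstate=resume).
    scontrol reboot ASAP nextstate=resume reason="new image" node[001-016]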

Also, slurmctld knows that these nodes are due back within 
"ResumeTimeout" seconds of the reboot being issued, so it can plan for 
them when scheduling large jobs, rather than assuming there is no way 
it could run them and letting lots of smaller jobs get in the way.
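
If you want to see how long that window is on your system, 
ResumeTimeout comes straight from slurm.conf and (from memory) shows up 
in the running config, e.g.:

    # How long slurmctld expects a rebooted node to take to come back.
    scontrol show config | grep -i ResumeTimeout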

Hope that helps!

All the best,
Chris
-- 
Chris Samuel  :  http://www.csamuel.org/  :  Berkeley, CA, USA