[slurm-users] Drain node from TaskProlog / TaskEpilog

Brian Andrus toomuchit at gmail.com
Mon May 24 14:05:22 UTC 2021


I'm not sure I understand how a failed node can only be detected from 
inside the job environment.

That description sounds more like "our application is behaving badly, 
but not so badly that the node stops responding." For that situation, 
your app or job should have something in place to catch the problem and 
report it to Slurm in some fashion (up to and including killing the 
process).
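One way to do that reporting, as a sketch: have the job (or a TaskEpilog script) run a site-specific health test and drain the node with scontrol when it fails. Note that TaskProlog/TaskEpilog run as the job user, so the scontrol update typically needs suitable privileges or must be relayed to the root-run Epilog; check_node_health below is a hypothetical placeholder for your own detection test.

```shell
#!/bin/sh
# Hypothetical TaskEpilog-style health check. Replace check_node_health
# with your site-specific test; it should return non-zero on failure.
check_node_health() {
    # e.g. probe GPUs, filesystems, network mounts...
    return 0
}

if ! check_node_health; then
    # Drain this node so no further jobs are scheduled on it.
    # (Requires sufficient privileges; an unprivileged TaskEpilog may
    # instead need to leave a flag file for the root-run Epilog.)
    scontrol update NodeName="$(hostname -s)" State=DRAIN \
        Reason="epilog health check failed (job $SLURM_JOB_ID)"
fi
```

The Reason string shows up in sinfo -R, so including the job ID makes it easy to trace which job first observed the fault.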

Slurm polls the nodes, and if slurmd does not respond it marks the node 
as down. So slurmd must still be responding in your case.

If you can provide a better description of the symptoms that make you 
believe the node has failed, we can help a little more.

On 5/24/2021 3:02 AM, Mark Dixon wrote:
> Hi all,
>
> Sometimes our compute nodes get into a failed state which we can only 
> detect from inside the job environment.
>
> I can see that TaskProlog / TaskEpilog allows us to run our detection 
> test; however, unlike Epilog and Prolog, they do not drain a node if 
> they exit with a non-zero exit code.
>
> Does anyone have advice on automatically draining a node in this 
> situation, please?
>
> Best wishes,
>
> Mark
>
