<div dir="ltr"><div>Hello,</div><div><br></div><div>Jobs can be requeued if something goes wrong, and the failing node excluded by the controller.<br></div><div><br></div><div><dt><b>--requeue</b></dt><dd>
Specifies that the batch job should be eligible for requeuing.
The job may be requeued explicitly by a system administrator, after node
failure, or upon preemption by a higher priority job.
When a job is requeued, the batch script is initiated from its beginning.
Also see the <b>--no-requeue</b> option.
The <i>JobRequeue</i> configuration parameter controls the default
behavior on the cluster.</dd></div><div><br></div><div>Also, jobs can be run on a specific set of nodes, or with certain nodes excluded:</div><div><br></div><div><dt><b>-w</b>, <b>--nodelist</b>=<<i>node name list</i>></dt><dd>
Request a specific list of hosts.
The job will contain <i>all</i> of these hosts and possibly additional hosts
as needed to satisfy resource requirements.
The list may be specified as a comma-separated list of hosts, a range of hosts
(host[1-5,7,...] for example), or a filename.
The host list will be assumed to be a filename if it contains a "/" character.
If you specify a minimum node or processor count larger than can be satisfied
by the supplied host list, additional resources will be allocated on other
nodes as needed.
Duplicate node names in the list will be ignored.
The order of the node names in the list is not important; the node names
will be sorted by Slurm.
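For instance (the node names and script name here are hypothetical):

```shell
# Run on (at least) these four nodes, given as a range:
sbatch --nodelist=node[01-04] job.sh

# Read the host list from a file instead; the "/" in the
# argument is what tells Slurm it is a filename:
sbatch -w ./hosts.txt job.sh
```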
</dd></div><div><br></div><div><dt><b>-x</b>, <b>--exclude</b>=<<i>node name list</i>></dt><dd>
Explicitly exclude certain nodes from the resources granted to the job.
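In the failure-and-requeue scenario the job has to maintain that exclude list itself, for example by appending the current host to a comma-separated list. A minimal sketch of the string handling only (node names are made up, and everything Slurm-specific is left out):

```shell
# Append a node to a comma-separated exclude list, avoiding a
# leading comma when the list starts out empty.
append_exclude() {
  list="$1"
  node="$2"
  if [ -z "$list" ]; then
    printf '%s' "$node"
  else
    printf '%s,%s' "$list" "$node"
  fi
}

EXC=$(append_exclude "" "node03")        # -> node03
EXC=$(append_exclude "$EXC" "node07")    # -> node03,node07
echo "$EXC"
```

The resulting list could then be fed to "scontrol update job $SLURM_JOB_ID ExcNodeList=$EXC" once the job is pending again, or to a fresh submission with -x "$EXC". If a retry cap is wanted, recent Slurm versions expose the number of restarts to the batch script in the SLURM_RESTART_COUNT environment variable.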
</dd></div><div><br></div><div>Does this help?</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Jun 4, 2020 at 16:03, Ransom, Geoffrey M. (<<a href="mailto:Geoffrey.Ransom@jhuapl.edu">Geoffrey.Ransom@jhuapl.edu</a>>) wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div lang="EN-US">
<div class="gmail-m_4258007788409208335WordSection1">
<p class="MsoNormal"> </p>
<p class="MsoNormal">Hello</p>
<p class="MsoNormal"> We are moving from Univa(sge) to slurm and one of our users has jobs that if they detect a failure on the current machine they add that machine to their exclude list and requeue themselves. The user wants to emulate that behavior in
slurm.</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">It seems like “scontrol update job ${SLURM_JOB_ID} ExcNodeList $NEWExcNodeList” won’t work on a running job, but it does work on a job pending in the queue. This means the job can’t do this step and requeue itself to avoid running on the
same host as before.</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">Our user wants his jobs to be able to exclude the current node and requeue themselves.</p>
<p class="MsoNormal">Is there some way to accomplish this in slurm?</p>
<p class="MsoNormal">Is there a requeue counter of some sort so a job can see if it has requeued itself more than X times and give up?</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">Thanks.</p>
</div>
</div>
</blockquote></div>