<html style="direction: ltr;">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<style id="bidiui-paragraph-margins" type="text/css">body p { margin-bottom: 0cm; margin-top: 0pt; } </style>
</head>
<body bidimailui-charset-is-forced="true" style="direction: ltr;">
<p>Not sure about automatically canceling a job array, except
perhaps by submitting two consecutive arrays - the first of size
20, and the second with the rest of the elements and an afterok
dependency. As an aside, the Slurm documentation refers to a
single job in a job array as a task. I personally prefer element,
as in array element.<br>
</p>
<p><br>
</p>
<p>Consider creating a batch job with:</p>
<p><br>
</p>
<p>arrayid=$(sbatch --parsable --array=0-19 array-job.sh)</p>
<p>sbatch --dependency=afterok:$arrayid --array=20-50000
array-job.sh</p>
<p><br>
</p>
<p>I'm not near a cluster right now, so I can't test for
correctness. The main drawback, of course, is that if the first 20
jobs take a long time to complete and there are enough resources
to run more than 20 jobs in parallel, those resources will sit
idle for the duration. Not a big issue on a busy cluster, as some
other job will run in the meantime, but it will delay the array's
completion time if 20 jobs use significantly less than the
available resources.<br>
</p>
<p><br>
</p>
<p>It might be possible to depend on afternotok of the first 20
tasks to run --wrap="scancel $arrayid"</p>
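<p>Spelled out, one reading of that (combined with the two-array
submission above) might be the following untested sketch. The
SBATCH_CMD indirection is only there so the script can be dry-run
without a cluster, and array-job.sh is the batch script from
above:</p>

```shell
#!/bin/bash
# Untested sketch: submit the first 20 elements as a canary, chain the rest
# behind afterok, and add a watchdog that scancels the follow-up array if any
# canary task fails. SBATCH_CMD is an illustrative override for dry runs.
submit_with_watchdog() {
    local sbatch=${SBATCH_CMD:-sbatch}
    local canary rest
    canary=$($sbatch --parsable --array=0-19 array-job.sh) || return 1
    # the remaining elements start only if every canary task succeeded
    rest=$($sbatch --parsable --dependency=afterok:$canary \
                   --array=20-50000 array-job.sh) || return 1
    # afternotok fires if any canary task failed; clean up the pending array
    $sbatch --dependency=afternotok:$canary --wrap="scancel $rest"
}
```

<p>With afterok alone the second array may sit pending forever when
the canary fails; the afternotok watchdog is what cleans it up.</p>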
<p><br>
</p>
<p>Maybe something like:</p>
<p><br>
</p>
<p>sbatch --array=1-50000 array-job.sh</p>
<p>with<br>
</p>
<p>cat array-job.sh<br>
</p>
<p>
<blockquote type="cite">
<p>#!/bin/bash</p>
<p><br>
</p>
<p>srun myjob.sh $SLURM_ARRAY_TASK_ID &
</p>
<p>[[ $SLURM_ARRAY_TASK_ID -gt 20 ]] && srun -d
afternotok:${SLURM_ARRAY_JOB_ID}_1,afternotok:${SLURM_ARRAY_JOB_ID}_2,...afternotok:${SLURM_ARRAY_JOB_ID}_20
scancel $SLURM_ARRAY_JOB_ID<br>
</p>
<p>wait<br>
</p>
<p><br>
</p>
</blockquote>
</p>
<p><br>
Will also work. Untested, use at your own risk.<br>
</p>
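<p>Typing out the twenty afternotok clauses by hand is error-prone;
the list can be generated instead. A small bash helper (the name is
made up, and it's untested on a real cluster):</p>

```shell
#!/bin/bash
# Build "afternotok:JOBID_1,afternotok:JOBID_2,...,afternotok:JOBID_N"
# for use with srun/sbatch -d. The helper name is illustrative.
afternotok_list() {
    local jid=$1 n=$2 dep="" i
    for ((i = 1; i <= n; i++)); do
        dep+="${dep:+,}afternotok:${jid}_${i}"
    done
    printf '%s\n' "$dep"
}
```

<p>so the scancel line in array-job.sh could read srun -d
"$(afternotok_list $SLURM_ARRAY_JOB_ID 20)" scancel
$SLURM_ARRAY_JOB_ID.</p>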
<p><br>
</p>
<p>Yet another approach might be to use an epilog (or possibly
EpilogSlurmctld) to log exit codes for the first 20 tasks in each
array, and cancel the array if any is non-zero. This is a global
approach that affects all job arrays, so it might not be
appropriate for your use case.<br>
</p>
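<p>For completeness, an EpilogSlurmctld along those lines might look
like the sketch below. Untested; the state directory, the hard-coded
threshold of 20, and the exact exit-code variable
(SLURM_JOB_EXIT_CODE here) should be checked against the Slurm
Prolog/Epilog documentation. It is written as a function only so it
can be exercised outside slurmctld.</p>

```shell
#!/bin/bash
# Sketch of an EpilogSlurmctld hook: if any of the first 20 tasks of a job
# array exits non-zero, record it and scancel the whole array.
# SLURM_ARRAY_JOB_ID, SLURM_ARRAY_TASK_ID and SLURM_JOB_EXIT_CODE are set by
# slurmctld for the epilog; STATE_DIR and SCANCEL are illustrative overrides.
array_failure_epilog() {
    local state_dir=${STATE_DIR:-/var/spool/slurm/array-watch}
    local scancel=${SCANCEL:-scancel}

    [ -n "$SLURM_ARRAY_JOB_ID" ] || return 0           # not part of an array
    [ "${SLURM_ARRAY_TASK_ID:-0}" -le 20 ] || return 0 # only watch the first 20
    [ "${SLURM_JOB_EXIT_CODE:-0}" -ne 0 ] || return 0  # this task succeeded

    mkdir -p "$state_dir"
    echo "$SLURM_ARRAY_TASK_ID" >> "$state_dir/$SLURM_ARRAY_JOB_ID.failed"
    "$scancel" "$SLURM_ARRAY_JOB_ID"
}
```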
<p><br>
</p>
<div class="moz-cite-prefix">On 01/08/2023 16:48:47, Josef Dvoracek
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:d4433c52-2529-eb3f-0e39-3e1730bfb51e@fzu.cz">my users
found the beauty of job arrays, and they tend to use it every then
and now.
<br>
<br>
Sometimes human factor steps in, and something is wrong in job
array specification, and cluster "works" on one failed array job
after another.
<br>
<br>
Isn't there any way how to automatically stop/scancel/? job array
after, let say, 20 failed array jobs in row?
<br>
<br>
So far my experience is, if first ~20 array jobs go right, there
is no catastrophic failure in sbatch-file. If they fail, usually
it's bad and there is no sense to crunch the remaining thousands
of job array jobs.
<br>
<br>
OT: what is the correct terminology for one item in job array...
sub-job? job-array-job? :)
<br>
<br>
cheers
<br>
<br>
josef
<br>
<br>
<br>
</blockquote>
<pre class="moz-signature" cols="72">--
Regards,
--Dani_L.</pre>
</body>
</html>