[slurm-users] Interlocking / Concurrent runner

Reuti reuti at staff.uni-marburg.de
Tue Oct 22 10:20:35 UTC 2019


I'm more used to GridEngine but use SLURM at remote locations. What I miss in SLURM are two features which I think are related to this area:

• use job names with wildcards for the various commands like `scancel`

• use job names with wildcards for --dependency
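Until such wildcard support exists, a pattern-based `scancel` can be emulated by filtering `squeue` output by job name. This is a minimal sketch; the `squeue` call is shown as a comment and replaced with sample output (hypothetical job IDs and names) so it stands alone:

```shell
# In practice, list running jobs as "<jobid> <jobname>" pairs with:
#   squeue --noheader --format="%i %j"
# Sample output, hard-coded here for illustration:
squeue_out="101 runA-step1
102 runA-step2
103 runB-step1"

# Select the job IDs whose name matches the runA* pattern
ids=$(awk '$2 ~ /^runA/ {print $1}' <<<"$squeue_out")
echo "$ids"

# Then cancel them:
#   scancel $ids
```

`scancel --name` does accept a job name today, but only as an exact match, which is why the filtering has to happen on the `squeue` side.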

If I get you right, you would like to submit:

$ sbatch  --job-name="runA-step1" …
$ sbatch  --job-name="runA-step2" --dependency="after:runA-step1" …
$ sbatch  --job-name="runB-step1" --dependency="after:runA*" …
$ sbatch  --job-name="runB-step2" --dependency="after:runB-step1" …

A workflow using such dependencies could then be set up without knowing any <jobid> beforehand.
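Until then, the usual workaround is to record the job IDs at submission time with `sbatch --parsable` (which prints only the job ID) and build the dependency string from them. A minimal sketch, with the `sbatch` calls shown as comments and placeholder IDs standing in for their output:

```shell
# In practice, capture each ID at submission time:
#   id1=$(sbatch --parsable --job-name="runA-step1" step1.sh)
#   id2=$(sbatch --parsable --job-name="runA-step2" step2.sh)
# Placeholder IDs for illustration:
jobids=(1001 1002)

# Join into the form sbatch expects: afterok:<id>:<id>...
dep="afterok$(printf ':%s' "${jobids[@]}")"
echo "$dep"

# Then submit the next batch against it:
#   sbatch --job-name="runB-step1" --dependency="$dep" next.sh
```

This is exactly the bookkeeping Florian describes below: every run must collect its own job IDs before the next run can be chained onto them.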

-- Reuti

> Am 22.10.2019 um 11:55 schrieb Florian Lohoff <f at zz.de>:
> On Tue, Oct 22, 2019 at 12:06:57PM +0300, mercan wrote:
>> Hi;
>> You can use the "--dependency=afterok:jobid:jobid ..." parameter of the
>> sbatch to ensure the new submitted job will be waiting until all older jobs
>> are finished. Simply, you can submit the new job even while older jobs are
>> running, the new job will not start before old jobs are finished.
> I am using that - but imagine a collection of 20-40 jobs, running
> for 2 1/2 hours. Then I'd like to run the same jobs again.
> So I would need to record all job IDs of the last batch run
> to start the next.
> Flo
> -- 
> Florian Lohoff                                                 f at zz.de
>        UTF-8 Test: The 🐈 ran after a 🐁, but the 🐁 ran away
