Hi Martin,

I faced a similar problem where I had to deal with a huge task farm (thousands of tasks processing 1 TB of satellite data) with varying run times and memory requirements. I ended up writing a REST server that hands out tasks to clients. I then simply fired up an array job where each job requests new tasks from the task server until either all tasks are processed or the job is killed for exceeding its run time or memory limit. The system keeps track of completed and running tasks, so tasks that didn't complete can be rescheduled. The code is available on GitHub, and the paper describing the service is here:

https://openresearchsoftware.metajnl.com/articles/10.5334/jors.393/
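Just to illustrate the pattern (the server URL and endpoint names below are made up, and process_one_task stands in for your actual worker -- the real interface is described in the paper and repository linked above), each array task essentially runs a loop like this:

    #!/bin/bash
    #SBATCH --array=1-100
    #SBATCH --time=04:00:00
    #SBATCH --mem=8G

    # Illustrative sketch only: SERVER and the /tasks endpoints are
    # invented; see the linked paper/repository for the real interface.
    SERVER=https://taskserver.example.org

    while true; do
        # ask the server for the next pending task; stop when none are left
        task=$(curl -sf "$SERVER/tasks/next") || break
        # run the worker; report success so the task is not rescheduled
        if ./process_one_task "$task"; then
            curl -sf -X POST "$SERVER/tasks/$task/done"
        fi
    done

If a job hits its time or memory limit mid-task, the task simply stays marked as running and can be handed out again later.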
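And for the plain srun-based farming that the quoted discussion below is about: with recent Slurm versions the pattern from the documentation needs --exact (and a per-step --mem), so an untested sketch -- with made-up task counts and memory figures -- would be roughly:

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks=3

    # Each step gets only the resources it asks for (--exact); an explicit
    # --mem per step keeps one step from claiming all of the node's memory.
    srun --exact -N 1 -n 2 --mem=4G prog1 &> log.1 &
    srun --exact -N 1 -n 1 --mem=2G prog2 &> log.2 &
    wait  # keep the batch script alive until all steps have finished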
Cheers,
magnus


-----Original Message-----
From: "Ohlerich, Martin" <Martin.Ohlerich@lrz.de>
Reply-To: Slurm User Community List <slurm-users@lists.schedmd.com>
To: slurm-users@schedmd.com, Slurm User Community List <slurm-users@lists.schedmd.com>
Subject: [ext] Re: [slurm-users] srun jobfarming hassle question
Date: Wed, 18 Jan 2023 13:39:30 +0000

Hello Björn-Helge.

Sigh ...

First of all, of course, many thanks! This indeed helped a lot!

Two comments:

a) Why do the interfaces of the Slurm tools keep changing? I once learned that interfaces should be designed to be as stable as possible; otherwise users get frustrated and go away.

b) This only works if I specify --mem for each task. Although manageable, I wonder why one needs to be that restrictive. In principle, in the use case outlined, one task could use a bit less memory while the other requires a bit more than half of the node's available memory. (So clearly this isn't always predictable.) I only hope that in such cases the second task does not die from OOM ... (I will know soon, I guess.)

Really, thank you! That was a very helpful hint!

Cheers, Martin

________________________________
From: slurm-users <slurm-users-bounces@lists.schedmd.com> on behalf of Bjørn-Helge Mevik <b.h.mevik@usit.uio.no>
Sent: Wednesday, 18 January 2023 13:49
To: slurm-users@schedmd.com
Subject: Re: [slurm-users] srun jobfarming hassle question

"Ohlerich, Martin" <Martin.Ohlerich@lrz.de> writes:

> Dear Colleagues,
>
> For quite some years now, we have been facing issues on our clusters, again and again, with so-called job-farming (or task-farming) setups in Slurm jobs using srun. And it bothers me that we can hardly help users with requests in this regard.
>
> From the documentation (https://slurm.schedmd.com/srun.html#SECTION_EXAMPLES), it reads like this:
>
> ------------------------------------------->
>
> ...
> #SBATCH --nodes=??
> ...
> srun -N 1 -n 2 ... prog1 &> log.1 &
> srun -N 1 -n 1 ... prog2 &> log.2 &

Unfortunately, that part of the documentation is not quite up to date.
The semantics of srun have changed a little over the last couple of
years/Slurm versions, so today you have to use "srun --exact ...". From
"man srun" (version 21.08):

       --exact
              Allow a step access to only the resources requested for the
              step. By default, all non-GRES resources on each node in
              the step allocation will be used. This option only applies
              to step allocations.
              NOTE: Parallel steps will either be blocked or rejected
              until requested step resources are available unless --overlap
              is specified. Job resources can be held after the completion
              of an srun command while Slurm does job cleanup. Step
              epilogs and/or SPANK plugins can further delay the release
              of step resources.

-- 
Magnus Hagdorn
Charité – Universitätsmedizin Berlin
Geschäftsbereich IT | Scientific Computing

Campus Charité Virchow Klinikum
Forum 4 | Ebene 02 | Raum 2.020
Augustenburger Platz 1
13353 Berlin

magnus.hagdorn@charite.de
https://www.charite.de
HPC Helpdesk: sc-hpc-helpdesk@charite.de