[slurm-users] requesting resources and afterwards launch an array of calculations

Alfredo Quevedo maquevedo.unc at gmail.com
Wed Dec 19 12:50:15 MST 2018


Thank you very much, Carlos, for the info.
Regards,
Alfredo

Sent from BlueMail

On December 19, 2018 at 13:36, Carlos Fenoy <minibit at gmail.com> wrote:
>Hi Alfredo,
>
>You can have a look at using https://github.com/eth-cscs/GREASY . It
>was developed before array jobs were supported in Slurm, and it will
>do exactly what you want.
>
>Regards,
>Carlos
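
A rough sketch of the GREASY approach, assuming the greasy binary is on the PATH and takes a plain task file with one command per line (the GREASY README has the exact invocation and options); tasks.txt, run_greasy.sbatch and ./my_calculation are placeholder names:

    $ cat tasks.txt                # one independent command per line, 104 lines in total
    ./my_calculation 1
    ./my_calculation 2
    ...
    ./my_calculation 104

    $ cat run_greasy.sbatch
    #!/bin/bash
    #SBATCH --ntasks=30            # GREASY spreads the 104 tasks over 30 workers
    #SBATCH --cpus-per-task=1
    greasy tasks.txt

    $ sbatch run_greasy.sbatch     # a single job in the queue, so the running-job limit is hit only once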
>
>On Wed, Dec 19, 2018 at 3:33 PM Alfredo Quevedo
><maquevedo.unc at gmail.com>
>wrote:
>
>> Thank you, Michael, for the feedback. My scenario is the following: I
>> want to run a job array with (let's say) 30 jobs running at a time, so
>> I set the Slurm input as follows:
>>
>> #SBATCH --array=1-104%30
>> #SBATCH --ntasks=1
>>
>> However, only 4 jobs within the array are launched at a time, due to
>> the maximum number of running jobs set in the Slurm configuration (4).
>> As a workaround to the issue, the sysadmin suggested that I request
>> the resources first and afterwards distribute the assigned resources
>> across a set of single-CPU tasks. I believe that with the solution you
>> mentioned, only 30 (out of the 104) jobs would be finished?
>>
>> thanks
>>
>> Alfredo
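
A rough sketch of the workaround the sysadmin describes: request one allocation with 30 single-CPU slots and cycle all 104 calculations through it as job steps. Here ./my_calculation is a placeholder for whatever each array element runs, and the srun step options may need adjusting to the local Slurm version:

    #!/bin/bash
    #SBATCH --ntasks=30              # one job, 30 single-CPU task slots
    #SBATCH --cpus-per-task=1

    for i in $(seq 1 104); do
        # run each calculation as a single-CPU job step inside the allocation;
        # --exclusive keeps steps off each other's CPUs (newer Slurm uses --exact)
        srun --exclusive --ntasks=1 --nodes=1 ./my_calculation "$i" &
        # never keep more background steps running than the allocation has slots
        while [ "$(jobs -rp | wc -l)" -ge "$SLURM_NTASKS" ]; do
            sleep 5
        done
    done
    wait                             # block until the last steps complete

Because this is one job in the queue, the 4-running-job limit applies only once, while the 104 calculations still run at most 30 at a time.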
>>
>>
>> On 19/12/2018 at 11:15, Renfro, Michael wrote:
>> > Literal job arrays are built into Slurm:
>> > https://slurm.schedmd.com/job_array.html
>> >
>> > Alternatively, if you wanted to allocate a set of CPUs for a
>> > parallel task, and then run a set of single-CPU tasks in the same
>> > job, something like:
>> >
>> >    #!/bin/bash
>> >    #SBATCH --ntasks=30
>> >    srun --ntasks=${SLURM_NTASKS} hostname
>> >
>> > is one way of doing it. If that’s not what you’re looking for, some
>> > other details would be needed.
>> >
>>
>>
>
>--
>Carles Fenoy