[slurm-users] requesting resources and afterwards launch an array of calculations

Alfredo Quevedo maquevedo.unc at gmail.com
Wed Dec 19 22:00:07 MST 2018


Thank you Sam for this example. I will try to apply this procedure to my case study,
regards

Alfredo

Sent from BlueMail

On 20 December 2018 at 00:11, Sam Hawarden <sam.hawarden at otago.ac.nz> wrote:
>Hi Alfredo,
>
>
>Beyond what is already suggested, I have used the following script to
>run a set number of jobs simultaneously within a batch script.
>
>
>for thing in "${arrayOfThings[@]}"; do echo "$thing"; done | (
>    srun -J JobName xargs -I{} --max-procs "${SLURM_JOB_CPUS_PER_NODE}" bash -c '{
>        someCommand -args -and -options {}
>    }'
>)
>
>
>Where {} is replaced with each $thing.
>
>
>In operation here: <https://github.com/shawarden/Fastq2VCF/blob/master/slurm-scripts/blockalign.sl#L215>
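>
>A minimal self-contained sketch of the same pattern inside a batch
>script, in case it helps; the work items, job name and command are
>placeholders, and SLURM_JOB_CPUS_PER_NODE is assumed to be a plain
>number (single-node allocation):
>
>#!/bin/bash
>#SBATCH --job-name=fanout
>#SBATCH --nodes=1
>#SBATCH --ntasks=1
>#SBATCH --cpus-per-task=8
>
># One work item per line on stdin; xargs hands each line to bash as {}
># and keeps at most $SLURM_JOB_CPUS_PER_NODE of them running at once.
>arrayOfThings=(sample01 sample02 sample03 sample04)
>printf '%s\n' "${arrayOfThings[@]}" | \
>    xargs -I{} --max-procs "${SLURM_JOB_CPUS_PER_NODE}" \
>        bash -c 'echo "processing {}"'   # your real command goes here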
>
>
>Cheers,
>
>  Sam
>
>________________________________
>Sam Hawarden
>Assistant Research Fellow
>Pathology Department
>Dunedin School of Medicine
>sam.hawarden(at)otago.ac.nz
>________________________________
>From: slurm-users <slurm-users-bounces at lists.schedmd.com> on behalf of
>Carlos Fenoy <minibit at gmail.com>
>Sent: Thursday, 20 December 2018 04:59
>To: Slurm User Community List
>Subject: Re: [slurm-users] requesting resources and afterwards launch
>an array of calculations
>
>Hi Alfredo,
>
>You can have a look at using https://github.com/eth-cscs/GREASY . It
>was developed before array jobs were supported in Slurm, and it will do
>exactly what you want.
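>
>For reference, GREASY is typically driven by a plain task file with one
>command per line, run from inside a single allocation. A rough sketch
>with illustrative names (the exact launcher can differ per site and
>engine, so check the GREASY docs for your installation). A task file:
>
># tasks.txt: one independent command per line
>./run_calculation --input case_001.inp
>./run_calculation --input case_002.inp
>./run_calculation --input case_003.inp
>
>and a submission script that requests the whole pool once and lets
>GREASY schedule the task lines onto it:
>
>#!/bin/bash
>#SBATCH --ntasks=30
>greasy tasks.txt   # some installations launch it via srun or mpirun instead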
>
>Regards,
>Carlos
>
>On Wed, Dec 19, 2018 at 3:33 PM Alfredo Quevedo
><maquevedo.unc at gmail.com> wrote:
>thank you Michael for the feedback, my scenario is the following: I want
>to run a job array of (let's say) 30 jobs. So I set the Slurm input as
>follows:
>
>#SBATCH --array=1-104%30
>#SBATCH --ntasks=1
>
>however, only 4 jobs within the array are launched at a time due to the
>maximum number of jobs allowed by the Slurm configuration (4). As a
>workaround to the issue, the sysadmin suggested that I request the
>resources first and afterwards distribute the assigned resources across
>multiple single-CPU tasks. I believe that with the solution you
>mentioned only 30 (out of the 104) jobs would be finished?
>
>thanks
>
>Alfredo
>
>
>On 19/12/2018 at 11:15, Renfro, Michael wrote:
>> Literal job arrays are built into Slurm: https://slurm.schedmd.com/job_array.html
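>>
>> For example, a minimal array script could look like this (the program
>> and input naming are placeholders; each array task picks its own work
>> item via SLURM_ARRAY_TASK_ID):
>>
>>    #!/bin/bash
>>    #SBATCH --array=1-30
>>    #SBATCH --ntasks=1
>>    # one array task per input file, selected by the task ID
>>    srun ./my_program input_${SLURM_ARRAY_TASK_ID}.dat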
>>
>> Alternatively, if you wanted to allocate a set of CPUs for a parallel
>> task, and then run a set of single-CPU tasks in the same job, something like:
>>
>>    #!/bin/bash
>>    #SBATCH --ntasks=30
>>    srun --ntasks=${SLURM_NTASKS} hostname
>>
>> is one way of doing it. If that's not what you're looking for, some
>> other details would be needed.
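>>
>> And if there are more work items than allocated tasks (say 104 items
>> on a 30-task allocation), one possible sketch is a throttled loop of
>> single-task job steps inside the same job; my_task and the input
>> naming are placeholders:
>>
>>    #!/bin/bash
>>    #SBATCH --ntasks=30
>>    #SBATCH --cpus-per-task=1
>>    for i in $(seq 1 104); do
>>        # each step uses one of the allocated tasks; --exclusive keeps
>>        # concurrent steps from sharing the same CPUs
>>        srun --ntasks=1 --nodes=1 --exclusive ./my_task input_${i}.dat &
>>        # throttle: never run more background steps than allocated tasks
>>        while [ "$(jobs -rp | wc -l)" -ge "${SLURM_NTASKS}" ]; do
>>            sleep 5
>>        done
>>    done
>>    wait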
>>
>
>
>
>--
>Carles Fenoy

