[slurm-users] Job Step Resource Requests are Ignored

Maria Semple maria at rstudio.com
Wed May 6 06:00:27 UTC 2020


Hi Chris,

Thanks for the tip about the memory units; I'll double-check that I'm using
them.
Is there no way to achieve what I want, then? I'd like the first and last
job steps to always be able to run, even if the second step requests more
resources than the cluster can provide.
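To make that concrete, here is a sketch of the behaviour I'm after (the
memory sizes and the ./heavy_work command are placeholders, and it assumes
the job allocation itself is sized for the largest step, as Chris suggests
below):

```shell
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=300M

# Lightweight bookkeeping steps request only what they need,
# well under the job allocation, so they should always launch.
srun --ntasks=1 --cpus-per-task=1 --mem=1M echo "Starting..."

# The heavy middle step asks for most of the allocation. If it
# cannot be satisfied, note the failure but keep going so the
# final step still runs.
srun --ntasks=1 --cpus-per-task=4 --mem=250M ./heavy_work \
    || echo "middle step failed to run" >&2

srun --ntasks=1 --cpus-per-task=1 --mem=1M echo "Finished."
```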

As a side note, do you know why it's not even possible to restrict the
resources a single step uses (i.e. request fewer CPUs than are available
to the full job)?

Thanks,
Maria

On Tue, May 5, 2020 at 10:27 PM Chris Samuel <chris at csamuel.org> wrote:

> On Tuesday, 5 May 2020 4:47:12 PM PDT Maria Semple wrote:
>
> > I'd like to set different resource limits for different steps of my job.
> A
> > sample script might look like this (e.g. job.sh):
> >
> > #!/bin/bash
> > srun --cpus-per-task=1 --mem=1 echo "Starting..."
> > srun --cpus-per-task=4 --mem=250 --exclusive <do something complicated>
> > srun --cpus-per-task=1 --mem=1 echo "Finished."
> >
> > Then I would run the script from the command line using the following
> > command: sbatch --ntasks=1 job.sh.
>
> You shouldn't ask for more resources with "srun" than have been allocated
> with "sbatch" - so if you want the job to be able to use up to 4 cores at
> once & that amount of memory you'll need to use:
>
> sbatch -c 4 --mem=250 --ntasks=1 job.sh
>
> I'd also suggest using suffixes for memory to disambiguate the values.
>
> All the best,
> Chris
> --
>   Chris Samuel  :  http://www.csamuel.org/  :  Berkeley, CA, USA
>
>
>
>
>
