[slurm-users] Allocate more memory

david vilanova vilanew at gmail.com
Wed Feb 7 08:50:25 MST 2018


Thanks all for your comments, I will look into that.

On Wed, 7 Feb 2018 at 16:37, Loris Bennett <loris.bennett at fu-berlin.de>
wrote:

>
> I was making the unwarranted assumption that you have multiple processes.
> So if you have a single process which needs more than 2GB, Ralph is of
> course right and there is nothing you can do.
>
> However, you are using R, so, depending on your problem, you may be able
> to make use of a package like Rmpi to allow your job to run on multiple
> nodes.
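>
> As a minimal sketch (untested; it assumes Rmpi and an MPI library are
> installed, and "myscript_mpi.R" is a hypothetical Rmpi-based version of
> your script), the submission could look like:
>
>   #SBATCH --ntasks=2              # two MPI ranks, possibly on two nodes
>   #SBATCH --mem-per-cpu=1500M     # 1.5GB per task, 3GB in total
>
>   srun R CMD BATCH myscript_mpi.R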
>
> Cheers,
>
> Loris
>
> "rhc at open-mpi.org" <rhc at open-mpi.org> writes:
>
> > Afraid not - since you don’t have any nodes that meet the 3G
> > requirement, you’ll just hang.
> >
> >> On Feb 7, 2018, at 7:01 AM, david vilanova <vilanew at gmail.com> wrote:
> >>
> >> Thanks for the quick response.
> >>
> >> Should the following script do the trick? That is, use as many nodes
> >> as required to get at least 3GB of total memory, even though my nodes
> >> were set up with 2GB each?
> >>
> >> #SBATCH --array=1-10%10
> >>
> >> #SBATCH --mem-per-cpu=3000M
> >>
> >> srun R CMD BATCH myscript.R
> >>
> >>
> >>
> >> thanks
> >>
> >>
> >>
> >>
> >> On 07/02/2018 15:50, Loris Bennett wrote:
> >>> Hi David,
> >>>
> >>> david martin <vilanew at gmail.com> writes:
> >>>
> >>>>
> >>>> Hi,
> >>>>
> >>>> I would like to submit a job that requires 3GB. The problem is that
> >>>> I have 70 nodes available, each with 2GB of memory.
> >>>>
> >>>> So the command sbatch --mem=3G will wait for resources to become
> >>>> available.
> >>>>
> >>>> Can I run sbatch and tell the cluster to use 3GB out of the 70GB
> >>>> available, or does that need a particular setup? That is, is the
> >>>> memory restricted to each node? Or should I allocate two nodes so
> >>>> that I have 2x2GB available?
> >>> Check
> >>>
> >>>   man sbatch
> >>>
> >>> You'll find that --mem means memory per node.  Thus, if you specify
> >>> 3GB but all the nodes have 2GB, your job will wait forever (or until
> >>> you buy more RAM and reconfigure Slurm).
> >>>
> >>> You probably want --mem-per-cpu, which is actually more like memory
> >>> per task.  This is obviously only going to work if your job can
> >>> actually run on more than one node, e.g. if it is MPI-enabled.
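> >>>
> >>> For example, a minimal sketch (untested; the figures are just chosen
> >>> to fit 2GB nodes):
> >>>
> >>>   #SBATCH --ntasks=2
> >>>   #SBATCH --mem-per-cpu=1500M
> >>>
> >>> This asks for two tasks with 1.5GB each, which Slurm can satisfy by
> >>> placing them on two of your 2GB nodes, giving the job 3GB in total.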
> >>>
> >>> Cheers,
> >>>
> >>> Loris
> >>>
> >>
> >>
> --
> Dr. Loris Bennett (Mr.)
> ZEDAT, Freie Universität Berlin         Email loris.bennett at fu-berlin.de
>
>