[slurm-users] Allocate more memory

rhc at open-mpi.org
Wed Feb 7 08:00:26 MST 2018


I’m afraid neither of those versions is going to solve the problem here - there is no way to allocate memory across nodes.

Simple reason: there is no way for a process to directly address memory on a separate node - you’d have to implement that via MPI or shmem or some other library.
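For completeness, here is a minimal sketch of the usual workaround, assuming the application is MPI-enabled and the MPI library can be launched with srun; the binary name ./my_mpi_app is made up for illustration. The idea is to request several tasks and give each task a per-CPU memory limit, so the aggregate memory of the job exceeds what any single node has:

  #!/bin/bash
  #SBATCH --ntasks=4          # four MPI ranks, which Slurm may place on different nodes
  #SBATCH --mem-per-cpu=1G    # 1 GB per allocated CPU, so roughly 4 GB aggregate for the job

  # Each rank still only addresses the RAM of the node it runs on;
  # the ranks exchange data over MPI rather than sharing an address space.
  srun ./my_mpi_app           # hypothetical binary, shown for illustration only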


> On Feb 7, 2018, at 6:57 AM, Loris Bennett <loris.bennett at fu-berlin.de> wrote:
> 
> Loris Bennett <loris.bennett at fu-berlin.de> writes:
> 
>> Hi David,
>> 
>> david martin <vilanew at gmail.com> writes:
>> 
>>> 
>>> Hi,
>>> 
>>> I would like to submit a job that requires 3 GB. The problem is that I have 70 nodes available, each with 2 GB of memory.
>>> 
>>> So the command sbatch --mem=3G will wait for resources to become available.
>>> 
>>> Can I run sbatch and tell the cluster to use the 3 GB out of the 70 GB
>>> available, or does that require a particular setup? That is, is the memory
>>> restricted to each node? Or should I allocate two nodes so that I
>>> have 2 x 4 GB available?
>> 
>> Check
>> 
>>  man sbatch
>> 
>> You'll find that --mem means memory per node.  Thus, if you specify 3GB
>> but all the nodes have 2GB, your job will wait forever (or until you buy
>> more RAM and reconfigure Slurm).
>> 
>> You probably want --mem-per-cpu, which is actually more like memory per
>> task.
> 
> The above should read
> 
>  You probably want --mem-per-cpu, which is actually more like memory per
>  core and thus memory per task if you have tasks per core set to 1.
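> To illustrate the difference with a quick sketch (the script name job.sh and
> the 1 GB figure are just placeholders):
> 
>   # Pends forever on 2 GB nodes: --mem is a per-node limit, and no node has 3 GB
>   sbatch --mem=3G job.sh
> 
>   # Can run: three tasks at 1 GB per CPU, which Slurm may spread over several 2 GB nodes
>   sbatch --ntasks=3 --mem-per-cpu=1G job.sh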
> 
>> This is obviously only going to work if your job can actually run
>> on more than one node, e.g. is MPI enabled.
>> 
>> Cheers,
>> 
>> Loris
> -- 
> Dr. Loris Bennett (Mr.)
> ZEDAT, Freie Universität Berlin         Email loris.bennett at fu-berlin.de