[slurm-users] Allocate more memory

Loris Bennett loris.bennett at fu-berlin.de
Wed Feb 7 07:57:18 MST 2018


Loris Bennett <loris.bennett at fu-berlin.de> writes:

> Hi David,
>
> david martin <vilanew at gmail.com> writes:
>
>>
>> Hi,
>>
>> I would like to submit a job that requires 3GB. The problem is that I have 70 nodes available, each with 2GB of memory.
>>
>> So the command sbatch --mem=3G will wait for resources to become available.
>>
>> Can I run sbatch and tell the cluster to use 3GB out of the 70GB
>> available, or does that require a particular setup? Is the memory
>> restricted to each node? Or should I allocate two nodes so that I
>> have 2x4GB available?
>
> Check
>
>   man sbatch
>
> You'll find that --mem means memory per node.  Thus, if you specify 3GB
> but all the nodes have 2GB, your job will wait forever (or until you buy
> more RAM and reconfigure Slurm).
>
> You probably want --mem-per-cpu, which is actually more like memory per
> task.

The above should read

  You probably want --mem-per-cpu, which is actually more like memory per
  core and thus memory per task if you have tasks per core set to 1.
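For example (just a sketch; "job.sh" is a placeholder for your actual
job script), you could ask for two tasks at 1500 MB per CPU, which keeps
each node's share under the 2GB limit while giving the job roughly 3GB
in total:

  sbatch --ntasks=2 --mem-per-cpu=1500M job.sh

Since no single node can satisfy 2 x 1500MB, Slurm will place the two
tasks on two different nodes.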

> This is obviously only going to work if your job can actually run
> on more than one node, e.g. if it is MPI-enabled.
>
> Cheers,
>
> Loris
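
P.S. For an MPI program, a minimal batch-script sketch of the same idea
(./my_mpi_prog stands in for whatever you actually run) might be:

  #!/bin/bash
  #SBATCH --ntasks=2            # tasks may land on different nodes
  #SBATCH --mem-per-cpu=1500M   # 2 x 1500MB = ~3GB in total
  srun ./my_mpi_prog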
-- 
Dr. Loris Bennett (Mr.)
ZEDAT, Freie Universität Berlin         Email loris.bennett at fu-berlin.de


